Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings, and use the review commands to manage reviews.
📝 Walkthrough

Adds a new Elixir/Phoenix Feedreader: Ash-modeled domain (feeds, entries, users/tokens), Oban scheduler/workers to fetch and upsert feed entries, Phoenix LiveView UI with streams/pagination and authentication, full configs/migrations/assets, tests, and developer docs/tooling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Client as LiveView Client
    participant Server as Phoenix Server
    participant DB as SQLite DB
    participant Oban as Oban Scheduler
    participant Worker as FetchFeed Worker
    participant HTTP as RSS/Atom
    participant PubSub as Phoenix.PubSub

    Note over Oban: Cron triggers scheduler
    Oban->>Server: invoke Scheduler.perform
    Server->>DB: list_feeds()
    DB-->>Server: feeds
    loop per due feed
        Server->>Oban: enqueue FetchFeed (staggered)
    end
    Worker->>HTTP: GET feed URL
    HTTP-->>Worker: XML or error
    alt success (200 + XML)
        Worker->>Worker: parse_feed(XML)
        Worker->>DB: upsert entries (Core.upsert_from_feed)
        DB-->>Worker: inserted/updated
        Worker->>PubSub: broadcast "entry:created"
        Worker->>DB: log_fetch_success
    else error
        Worker->>DB: log_fetch_error
    end
    PubSub->>Client: entry:created
    Client->>Client: update stream / show toast
```
```mermaid
sequenceDiagram
    actor User
    participant Browser as Browser/DOM
    participant LVServer as EntryLive.Index
    participant DB as Ash+SQLite
    participant DOM as LiveView Stream

    User->>Browser: navigate to "/"
    Browser->>LVServer: mount() / handle_params()
    LVServer->>DB: Core.list_unread(limit:50)
    DB-->>LVServer: entries
    LVServer->>DOM: stream(:entries, entries)
    DOM-->>Browser: render entries
    Browser->>LVServer: phx-click "toggle_read"
    LVServer->>DB: Core.toggle_read(entry_id)
    DB-->>LVServer: updated_entry
    alt entry removed from unread
        LVServer->>DOM: stream_delete(:entries, entry)
    else entry updated in place
        LVServer->>DOM: stream_insert(:entries, entry)
    end
    DOM-->>Browser: update UI
    Browser->>LVServer: phx-click "load_more"
    LVServer->>DB: Core.list_unread(limit:50, offset:50)
    DB-->>LVServer: next_entries
    LVServer->>DOM: stream(:entries, next_entries, at: -1)
    DOM-->>Browser: append entries
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 18
🧹 Nitpick comments (25)
mise.toml (1)
1-3: Consider pinning specific versions for reproducibility.

Using `latest` for `erlang` and `elixir` means different developers or CI runs may resolve different versions, potentially causing inconsistent builds or subtle compatibility issues. Pinning to specific versions (e.g., `erlang = '27.0'`, `elixir = '1.17.0'`) ensures reproducible environments.

🔧 Suggested change

```diff
 [tools]
-erlang = 'latest'
-elixir = 'latest'
+erlang = '27.0'
+elixir = '1.17.0'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mise.toml` around lines 1 - 3, Replace the non-deterministic 'latest' tool pins in mise.toml by specifying concrete version strings for the Erlang and Elixir keys (replace erlang = 'latest' and elixir = 'latest' with pinned versions like erlang = '27.0' and elixir = '1.17.0' or your chosen stable releases) so CI and developer machines resolve identical runtimes; update the erlang and elixir entries to the exact version numbers you want to standardize on.

feedreader/priv/repo/migrations/20260324172014_add_created_at.exs (1)
8-42: Consider wrapping table operations in a transaction and adding existence guards.

SQLite DDL statements are auto-committed outside explicit transactions, and the migration assumes `entries` exists. If the migration partially fails or runs on an unexpected state, recovery becomes difficult. A few improvements to consider:

- The `DROP TABLE entries` (Line 30) will fail if the table doesn't exist
- Consider using `DROP TABLE IF EXISTS` for safer idempotency
- Wrapping in an explicit transaction ensures atomicity

🔧 Suggested safer approach

```diff
-    execute("DROP TABLE entries")
+    execute("DROP TABLE IF EXISTS entries")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/priv/repo/migrations/20260324172014_add_created_at.exs` around lines 8 - 42, Wrap the whole migration body in an explicit transaction inside def up and make destructive DDL idempotent: replace the bare "DROP TABLE entries" execute call with "DROP TABLE IF EXISTS entries" and ensure the ALTER/INSERT steps only run when the original entries table exists (e.g., check sqlite_master or guard before the INSERT INTO entries_new ... SELECT ... FROM entries); keep CREATE TABLE IF NOT EXISTS for entries_new and the CREATE INDEX calls as-is, and commit/rollback the transaction around these execute(...) blocks to ensure atomicity if any step fails.

AGENTS.md (1)
1-5: LGTM! Clear documentation for contributors about the project migration context.

Minor clarification: Line 5 mentions "mise for erlang/elixir deps" — mise manages runtime versions (Erlang/Elixir installations), while `mix` manages package dependencies. Consider rewording to "mise for erlang/elixir version management and developer tasks" for accuracy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 1 - 5, Update the wording that currently says "mise for erlang/elixir deps and developer tasks" to accurately state that "mise" handles Erlang/Elixir version management and developer tasks and that "mix" is used to manage package dependencies (e.g., change the phrase to "mise for Erlang/Elixir version management and developer tasks; use mix for package dependencies"). Ensure the term "mise" and the clarification about "mix" are used exactly so contributors understand the distinction.

feedreader/lib/feedreader_web/components/time_helpers.ex (1)
8-21: Consider handling `NaiveDateTime` inputs.

If any date fields come from the database as `NaiveDateTime` (without timezone), this function will fail to match. Depending on your Ecto/Ash configuration, you may want to add a clause or convert to `DateTime` first.

♻️ Optional: Add NaiveDateTime clause

```diff
 def humanize_date(%DateTime{} = dt) do
   now = DateTime.utc_now()
   diff_seconds = DateTime.diff(now, dt, :second)
   # ...
 end
+
+def humanize_date(%NaiveDateTime{} = ndt) do
+  ndt
+  |> DateTime.from_naive!("Etc/UTC")
+  |> humanize_date()
+end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/components/time_helpers.ex` around lines 8 - 21, The current humanize_date/1 only matches %DateTime{} and will crash on %NaiveDateTime{} inputs; add a new clause like humanize_date(%NaiveDateTime{} = ndt) that converts the naive datetime to a DateTime (e.g., DateTime.from_naive(ndt, "Etc/UTC") or DateTime.from_naive(ndt, :utc)), handle the {:ok, dt} and {:error, _} cases, and then delegate to the existing humanize_date(DateTime) logic (or provide a sensible fallback) so NaiveDateTime values are supported.

feedreader/priv/resource_snapshots/repo/entries/20260324171514_dev.json (1)
97-113: Consider specifying `on_delete` behavior for the `feed_id` foreign key.

The foreign key reference has `"on_delete": null`, meaning the database won't enforce referential integrity when a feed is deleted. This could leave orphaned entries. Consider setting `on_delete: :delete` (cascade) or `on_delete: :restrict` in the Ash resource relationship definition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/priv/resource_snapshots/repo/entries/20260324171514_dev.json` around lines 97 - 113, The foreign key "entries_feed_id_fkey" for the "source": "feed_id" currently has "on_delete": null; update the Ash resource relationship that defines the feed_id association (the relationship referencing feeds via destination_attribute "id" or the relationship named feed/feed_id in the Entries resource) to explicitly set an on_delete behavior (e.g., on_delete: :delete or on_delete: :restrict) so the generated DB migration/metadata includes a non-null on_delete value and prevents orphaned entries.

feedreader/lib/feedreader/workers/scheduler.ex (2)
26-28: Consider using `Oban.insert/1` to prevent one failure from stopping all job scheduling.

`Oban.insert!/1` raises on failure, which would halt the loop and prevent remaining feeds from being scheduled. Using `Oban.insert/1` with error logging would be more resilient.

♻️ Proposed fix

```diff
 %{feed_id: feed.id}
 |> FeedReader.Workers.FetchFeed.new(schedule_in: delay_minutes * 60)
-|> Oban.insert!()
+|> Oban.insert()
+|> case do
+  {:ok, _job} ->
+    :ok
+
+  {:error, reason} ->
+    require Logger
+    Logger.warning("Failed to schedule fetch for feed #{feed.id}: #{inspect(reason)}")
+end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/scheduler.ex` around lines 26 - 28, Replace the panic-on-failure Oban.insert! call inside the scheduling loop with a non-raising Oban.insert call and handle both outcomes so one failing insert doesn't stop scheduling remaining feeds: call FeedReader.Workers.FetchFeed.new(schedule_in: delay_minutes * 60) as before but use Oban.insert(...) and pattern-match the result {:ok, job} to proceed and {:error, changeset_or_error} to Logger.error with context (include feed.id and the error/changeset) so failures are logged but the loop continues; update the code around the Oban.insert! invocation in the scheduler function to implement this handling.
26-28: Consider adding a uniqueness constraint to prevent duplicate fetch jobs.

If the scheduler runs while previous `FetchFeed` jobs are still pending, duplicate jobs may be queued for the same feed. Oban's `:unique` option can prevent this.

♻️ Proposed fix

```diff
 %{feed_id: feed.id}
-|> FeedReader.Workers.FetchFeed.new(schedule_in: delay_minutes * 60)
+|> FeedReader.Workers.FetchFeed.new(
+  schedule_in: delay_minutes * 60,
+  unique: [period: 600, keys: [:feed_id]]
+)
 |> Oban.insert()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/scheduler.ex` around lines 26 - 28, The scheduler can enqueue duplicate FetchFeed jobs; update the job creation in FeedReader.Workers.FetchFeed.new used here to include Oban uniqueness options so jobs are deduplicated by feed_id (e.g., add a unique option keyed on the feed identifier and a reasonable period), then call Oban.insert! as before; specifically modify the call that builds the job (FeedReader.Workers.FetchFeed.new(schedule_in: ...)) to include Oban's unique configuration (keys/period) so duplicate fetches for the same feed are prevented.

feedreader/lib/feedreader_web/live/entry_live/index.html.heex (3)
27-36: Consider extracting metadata rendering to a helper function.

The embedded Elixir block mixing assignments with `cond` output is functional but harder to test in isolation. A helper function would improve readability and testability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/live/entry_live/index.html.heex` around lines 27 - 36, Extract the metadata rendering into a view helper (e.g., entry_meta/1 or entry_meta(feed, published_at)) so the template no longer contains the inline assignments and cond; move the logic that calls feed_display_name(entry.feed) and humanize_date(entry.published_at) into that helper, have it return the final string or safe HTML (handling the three cases: both present, only feed_name, only date_str), and replace the cond block in the template with a single call to the helper (e.g., <%= entry_meta(entry) %>) so the logic is testable in isolation.
16-16: Defensive nil-check on `@streams` may mask initialization bugs.

The pattern `(@streams && @streams.entries) || []` suggests the stream might not be initialized. Streams should always be set up in `mount/3` before render. If this guard is needed, consider investigating why `@streams` could be nil rather than silently falling back.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/live/entry_live/index.html.heex` at line 16, The template uses a defensive nil-check "(`@streams` && `@streams.entries`) || []" which masks why `@streams` isn't set; remove that fallback and ensure `@streams` is assigned in mount/3 (or the relevant LiveView/LiveComponent init) so the template can simply iterate over `@streams.entries`. Locate the LiveView mount/3 (or the call that assigns :streams) and initialize assigns[:streams] to a structure with an entries enumerable (e.g., %{entries: []} or the actual stream source) before rendering so the template loop can safely use `@streams.entries` directly.
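One way to avoid the defensive guard flagged above is to seed the stream during mount. A minimal sketch, assuming a standard LiveView module; the empty-list seed is an illustration, not the project's actual code:

```elixir
def mount(_params, _session, socket) do
  # Seed the stream before first render so @streams.entries always exists;
  # handle_params/3 can later replace it via stream(socket, :entries, entries, reset: true).
  {:ok, stream(socket, :entries, [])}
end
```

With the stream guaranteed to exist, the template can iterate `@streams.entries` directly and any remaining nil would surface as a real bug instead of being silently swallowed.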
14-77: Load-more button placement inside stream container may cause unexpected behavior.

The `load-more-container` div (line 69) is placed inside the `#entries` div which has `phx-update="stream"`. Stream containers expect all children to be stream items managed by LiveView's stream operations. Static elements like the load-more button may be removed or duplicated during stream updates.

Move the load-more control outside the stream container:

♻️ Proposed fix

```diff
-<div id="entries" phx-update="stream" class="space-y-4">
+<div id="entries-container" class="space-y-4">
+  <div id="entries" phx-update="stream">
   <div
     :for={{id, entry} <- (@streams && @streams.entries) || []}
     id={id}
     class="bg-base-200/80 rounded-lg border border-base-300/50 p-4 backdrop-blur-sm"
   >
     <!-- ... entry content ... -->
   </div>
+  </div>
   <div :if={@has_more} id="load-more-container" class="text-center py-4">
     <!-- ... load more button ... -->
   </div>
   <div
     :if={!@has_more && @entry_count == 0}
     id="empty-state"
     class="text-center py-8 text-base-content/60"
   >
     <!-- ... empty state ... -->
   </div>
 </div>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/live/entry_live/index.html.heex` around lines 14 - 77, The Load More control (div id="load-more-container" and button id="load-more-btn" which triggers the "load_more" phx event) is currently inside the stream container (`#entries` with phx-update="stream") and can be removed/duplicated by LiveView stream updates; move the entire load-more-container block out of the div that has phx-update="stream" so it sits after the closing tag for the `#entries` stream container, ensuring the stream only contains stream items (the :for block producing {id, entry}) and the load-more button remains a static control outside the stream.

feedreader/test/support/conn_case.ex (1)
13-15: Documentation mentions PostgreSQL but this project uses SQLite.

The moduledoc references PostgreSQL async tests, but the project uses SQLite (based on the migration comment mentioning "sqlite migrations"). Consider updating the documentation to reflect the actual database being used.

📝 Proposed fix

```diff
 Finally, if the test case interacts with the database,
 we enable the SQL sandbox, so changes done to the database
-are reverted at the end of every test. If you are using
-PostgreSQL, you can even run database tests asynchronously
-by setting `use FeedreaderWeb.ConnCase, async: true`, although
-this option is not recommended for other databases.
+are reverted at the end of every test. Note that SQLite does
+not support async database tests, so `async: true` should not
+be used with tests that interact with the database.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/test/support/conn_case.ex` around lines 13 - 15, Update the module documentation in FeedreaderWeb.ConnCase (the `@moduledoc` text) to reference SQLite instead of PostgreSQL and adjust the guidance about async tests accordingly (e.g., "Because this project uses SQLite, running database tests asynchronously via `use FeedreaderWeb.ConnCase, async: true` is not recommended"). Locate the `@moduledoc` in conn_case.ex and replace the PostgreSQL-specific sentence with one that accurately reflects the project's SQLite migrations and async-testing recommendation.

feedreader/assets/js/app.js (1)
28-28: Consider a defensive check for the CSRF token meta tag.

If the `csrf-token` meta tag is missing, `querySelector` returns `null` and `getAttribute` will throw. While unlikely in a properly configured Phoenix app, a defensive check prevents cryptic errors.

Add defensive check

```diff
-const csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
+const csrfTokenEl = document.querySelector("meta[name='csrf-token']")
+const csrfToken = csrfTokenEl ? csrfTokenEl.getAttribute("content") : null
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/assets/js/app.js` at line 28, The code assumes the meta tag exists and calls getAttribute directly on document.querySelector("meta[name='csrf-token']"), which will throw if null; update the initialization of csrfToken to first query the element into a variable (e.g., const csrfMeta = document.querySelector("meta[name='csrf-token']")) and then defensively check csrfMeta before calling getAttribute (falling back to null or an empty string and optionally logging a warning) so csrfToken is safe to use elsewhere.

feedreader/lib/feedreader/workers/fetch_feed.ex (1)
49-51: Consider moving `import SweetXml` to module level.

Importing inside a function works but is unconventional. Moving it to the module level improves readability and follows Elixir conventions.

Move import to module level

```diff
 defmodule FeedReader.Workers.FetchFeed do
   use Oban.Worker, queue: :default

   require Logger
+  import SweetXml
   alias FeedReader.Core
```

Then remove line 50 (`import SweetXml`) from inside `parse_feed/1`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 49 - 51, Move the import of SweetXml out of the parse_feed/1 function to the module level: add import SweetXml at the top of the module (so functions like parse_feed/1 can use SweetXml helpers without importing inside the function) and remove the inline import SweetXml from inside the parse_feed(body) function; ensure no other local imports collide when you place it at module scope.

feedreader/mise.toml (2)
41-46: Missing trailing newline at EOF.

The file is missing a trailing newline, which is a POSIX convention and can cause issues with some tools.

Add trailing newline

```diff
 depends = [ "format_check", "lint", "security", "test" ]
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/mise.toml` around lines 41 - 46, The file mise.toml is missing a trailing newline at EOF which breaks POSIX conventions; open the file (look for the depends = [...] array block) and ensure the file ends with a single newline character by adding a newline after the closing bracket so the EOF is terminated properly.
1-3: Consider pinning specific tool versions for reproducibility.

Using `"latest"` for `elixir` and `erlang` can lead to inconsistent builds across developer machines and CI environments. Consider pinning to specific versions (e.g., `elixir = "1.17.0"`, `erlang = "27.0"`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/mise.toml` around lines 1 - 3, The current mise.toml uses floating versions ("latest") for the tools keys elixir and erlang which breaks reproducibility; update the elixir and erlang entries in mise.toml to pinned semver strings (for example change elixir = "latest" to a specific version like elixir = "1.17.0" and erlang = "27.0") so all developers and CI use the same toolchain; pick and verify the exact versions your project supports and replace the "latest" values accordingly.

feedreader/lib/feedreader_web/live/entry_live/index.ex (1)
18-26: Redundant `:entries` assign when using streams.

Line 22 assigns `:entries` directly, but line 26 also streams the entries. When using LiveView streams, the plain assign is redundant and unused in templates. Consider removing it to avoid confusion.

Remove redundant assign

```diff
 {:noreply,
  socket
  |> assign(:action, action)
  |> assign(:current_path, uri.path)
- |> assign(:entries, entries)
  |> assign(:entry_count, length(entries))
  |> assign(:offset, offset)
  |> assign(:has_more, has_more)
  |> stream(:entries, entries, reset: true)}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/live/entry_live/index.ex` around lines 18 - 26, The code assigns entries twice: a plain assign(:entries, entries) and a LiveView stream via stream(:entries, entries, reset: true); remove the redundant assign(:entries, entries) so only the stream is used (locate the block that pipes socket through assign(:entries, entries) and stream(:entries, entries, reset: true) and delete the assign call), ensuring templates and helpers reference the streamed collection via the LiveView stream API rather than the removed plain assign.

feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs (1)
42-56: Unreachable code after `raise` in `down/0`.

The `raise` on lines 47-49 halts execution unconditionally, making lines 51-55 dead code that will never run. While this migration was autogenerated by `ash_sqlite.generate_migrations`, consider either:

- Removing the unreachable statements, or
- Replacing `raise` with `IO.warn/2` if you want the rollback to proceed with a warning.

Option 1: Remove unreachable code

```diff
 def down do
   drop_if_exists unique_index(:entries, [:feed_id, :external_id],
     name: "entries_unique_entry_per_feed_index"
   )

   raise "SQLite does not support dropping foreign key constraints. " <>
           "You will need to manually recreate the `entries` table without the `entries_feed_id_fkey` constraint. " <>
           "See https://www.techonthenet.com/sqlite/foreign_keys/drop.php for guidance."
-
-  drop table(:entries)
-
-  drop_if_exists unique_index(:feeds, [:feed_url], name: "feeds_unique_feed_url_index")
-
-  drop table(:feeds)
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs` around lines 42 - 56, The down/0 migration contains an unconditional raise which halts execution and makes the subsequent drop/drop_if_exists lines (e.g., drop table(:entries), drop_if_exists unique_index(:feeds, ...), drop table(:feeds)) unreachable; to fix, either remove those dead statements so down/0 only raises with the explanatory message about entries_feed_id_fkey, or replace the raise call with IO.warn/2 (preserving the same message) so the migration emits a warning and then continues to execute the remaining cleanup statements in down/0.

feedreader/config/config.exs (2)
82-82: Hardcoded LiveView signing salt.

The `signing_salt` is hardcoded in the config file. While this works for development, ensure this is overridden in `runtime.exs` for production with a securely generated value from environment variables or secrets management.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/config/config.exs` at line 82, The LiveView signing_salt is hardcoded under the live_view config (signing_salt) — replace this by loading a secure value from environment/secrets at runtime: remove or leave a dev-only default in config.exs and add code in runtime.exs to read an env var (e.g. LIVE_VIEW_SIGNING_SALT) and set the live_view signing_salt accordingly, failing fast or logging an error if the value is missing in production; update any references to live_view/signing_salt to use the runtime-provided value.
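The runtime override described in the signing-salt finding can be sketched as follows; the env var name `LIVE_VIEW_SIGNING_SALT` follows the prompt's suggestion and is an assumption, not existing project code:

```elixir
# runtime.exs (sketch): fail fast in prod when the salt is not provided
import Config

if config_env() == :prod do
  signing_salt =
    System.get_env("LIVE_VIEW_SIGNING_SALT") ||
      raise "environment variable LIVE_VIEW_SIGNING_SALT is missing"

  config :feedreader, FeedreaderWeb.Endpoint,
    live_view: [signing_salt: signing_salt]
end
```

Because `runtime.exs` is evaluated at boot, this keeps the dev default in `config.exs` while guaranteeing production never ships the hardcoded value.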
12-23: Redundant notifier configuration for single-node SQLite.

`Oban.Notifiers.PG` is designed for distributed/clustered deployments and not needed for a single-node SQLite setup. The Lite engine relies on polling (`stage_interval`, default 1s) for job discovery and will function fine without an explicit notifier. Removing it will eliminate unnecessary overhead and avoid "isolated" status log messages.

Suggested fix: Remove unused notifier

```diff
 config :feedreader, Oban,
   engine: Oban.Engines.Lite,
-  notifier: Oban.Notifiers.PG,
   queues: [default: 10, scheduler: 5],
   repo: Feedreader.Repo,
   plugins: [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/config/config.exs` around lines 12 - 23, The Oban config currently sets notifier: Oban.Notifiers.PG which is unnecessary for a single-node SQLite deployment using Oban.Engines.Lite; remove the notifier: Oban.Notifiers.PG entry from the Oban config block so the Lite engine relies on its polling mechanism (stage_interval) and avoid redundant notifier setup and "isolated" logs—update the config where Oban is configured (look for the Oban, engine: Oban.Engines.Lite block) to simply omit the notifier key.

feedreader/test/feedreader_web/live/entry_live_test.exs (1)
105-126: Consider asserting loaded entries after the `load_more` click.

The test verifies the event is handled without error but doesn't assert that additional entries are actually loaded. A stronger test would verify the entry count increased or that previously hidden entries are now visible.

Suggested improvement

```diff
   # Click the load more button
-  render_click(view, "load_more")
+  html = render_click(view, "load_more")

-  # After clicking, button may or may not be visible depending on pagination
-  # The important thing is the event is handled without error
+  # Verify more entries are now visible
+  assert html =~ "Entry 51" or html =~ "Entry 52"
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/test/feedreader_web/live/entry_live_test.exs` around lines 105 - 126, The test currently only ensures the load_more event is handled; after calling render_click(view, "load_more") capture the updated rendered HTML (via render(view) or the return value of render_click) and assert additional entries are present by either (a) comparing the count of entry elements before and after the click (e.g., count occurrences of the entry selector used in your app such as ".entry" or the exact entry markup) or (b) asserting for the presence of a later entry title like "Entry 60" that was previously not visible; use the existing test helpers (has_element?/render/view) and the same setup that uses Core.upsert_from_feed! to validate that entries increased after invoking render_click(view, "load_more").

feedreader/test/feedreader/core/entry_test.exs (2)
46-62: Consider adding a test for toggling read from true to false.

The `toggle_starred` tests cover both directions (false→true and true→false), but `toggle_read` only tests one direction. For consistency and complete coverage, consider adding a test case for toggling from true to false.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/test/feedreader/core/entry_test.exs` around lines 46 - 62, Add a complementary test that verifies Core.toggle_read/1 flips is_read from true to false: create or upsert an entry with is_read: true (using Core.upsert_from_feed! as in the existing test), call Core.toggle_read(entry), assert the result is {:ok, updated} and assert updated.is_read == false; place this alongside the existing "toggles is_read from false to true" test in the same describe block to mirror the toggle_starred coverage.
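A sketch of the missing reverse-direction test; `Core.upsert_from_feed!` and `Core.toggle_read` come from the existing suite, while the specific attrs (`external_id`, `title`, URL) are illustrative assumptions:

```elixir
test "toggles is_read from true to false", %{feed: feed} do
  # Start from a read entry, mirroring the toggle_starred coverage
  entry =
    Core.upsert_from_feed!(%{
      external_id: "toggle-back-1",
      title: "Already Read",
      content_link: "https://example.com/toggle-back",
      feed_id: feed.id,
      is_read: true
    })

  assert {:ok, updated} = Core.toggle_read(entry)
  refute updated.is_read
end
```

Placed in the same describe block, this makes `toggle_read`'s coverage symmetric with `toggle_starred`.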
105-134: Test doesn't fully validate the filtering behavior.

The test description says "returns only unread entries" but the test creates an entry with `is_read: true`, then immediately toggles it back to `is_read: false`, so both entries end up unread. This doesn't actually verify that read entries are filtered out.

Consider keeping one entry as read and asserting it's excluded:

♻️ Proposed fix

```diff
 describe "unread/0" do
   test "returns only unread entries", %{feed: feed} do
     _entry1 =
       Core.upsert_from_feed!(%{
         external_id: "unread-1",
         title: "Unread Entry",
         content_link: "https://example.com/unread1",
         feed_id: feed.id
       })

-    entry2 =
+    _entry2 =
       Core.upsert_from_feed!(%{
         external_id: "read-1",
         title: "Read Entry",
         content_link: "https://example.com/read1",
         feed_id: feed.id,
         is_read: true
       })

-    assert entry2.is_read == true
-
-    {:ok, updated_entry2} = Core.toggle_read(entry2)
-
-    assert updated_entry2.is_read == false
-
     {:ok, results} = Core.list_unread()
-    assert length(results.results) == 2
+    assert length(results.results) == 1
+    assert hd(results.results).title == "Unread Entry"
   end
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/test/feedreader/core/entry_test.exs` around lines 105 - 134, The test creates two entries but toggles the "read" entry back to unread, so it doesn't verify filtering; update the test in entry_test.exs so one entry remains read and one unread (use Core.upsert_from_feed! to create an entry with is_read: true and do not call Core.toggle_read on that entry), then call Core.list_unread() and assert that only the unread entry is returned (e.g., length(results.results) == 1 and that the returned IDs/titles do not include the originally read entry) to confirm read entries are excluded.

feedreader/lib/feedreader_web/components/layouts.ex (1)
35-38: Unused `current_scope` attribute.

The `current_scope` attribute is declared but never used in the `app/1` component template. Consider removing it if not needed, or implement scope-based UI logic if intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/components/layouts.ex` around lines 35 - 38, The attr(:current_scope, :map) declared for the component is unused in the app/1 component template; either remove the attribute declaration or wire it into the app/1 template to implement scope-based UI logic (e.g., use current_scope to conditionally render navigation or scope-specific classes). Locate the attr(:current_scope, :map) declaration and the app/1 component definition and either delete the unused attr or add the necessary pattern matching/assign usage of current_scope inside app/1 (e.g., reference assigns.current_scope or accept it as a parameter) so the attribute is actually consumed.

feedreader/lib/feedreader/core.ex (2)
30-39: Error details are discarded during import.

When `add_feed` fails, the actual error reason is discarded (line 34). For debugging failed OPML imports, consider logging the error or returning accumulated error details.

♻️ Proposed enhancement

```diff
 results =
   Enum.map(feeds, fn feed_attrs ->
     case add_feed(feed_attrs) do
       {:ok, feed} -> {:ok, feed}
-      {:error, _} -> {:error, feed_attrs}
+      {:error, reason} ->
+        require Logger
+        Logger.warning("Failed to import feed #{feed_attrs.feed_url}: #{inspect(reason)}")
+        {:error, feed_attrs}
     end
   end)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/core.ex` around lines 30 - 39, The code currently discards the error reason returned by add_feed; update the Enum.map block so the error branch preserves the original reason (e.g., change the clause in the map from {:error, _} -> {:error, feed_attrs} to {:error, reason} -> {:error, {feed_attrs, reason}}) so callers of the function (and any OPML import flow) receive detailed failure info; optionally also add a process_logger/error_log call inside the {:error, reason} clause if you want immediate logging of the failure.
4-12: Consider moving `import SweetXml` to module level and simplifying the default logic.

The `import SweetXml` inside the function is unusual. While it works, placing it at the module level is more idiomatic Elixir.

Line 11's `|| "Imported"` is redundant since `xpath` with the `s` modifier returns an empty string (not `nil`) when the attribute is missing. Line 12 already handles the empty string case.

♻️ Proposed simplification
```diff
 defmodule FeedReader.Core do
   use Ash.Domain
+  import SweetXml

   def import_opml(opml_content) do
-    import SweetXml
-
     categories = opml_content |> xpath(~x"//body/outline"l)

     feeds =
       Enum.flat_map(categories, fn category ->
-        category_name = xpath(category, ~x"@text"s) || "Imported"
+        category_name = xpath(category, ~x"@text"s)
         category_name = if category_name != "", do: category_name, else: "Imported"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/core.ex` around lines 4 - 12, Move the `import SweetXml` out of the import_opml/1 function to the top of the module (module-level import) and simplify the defaulting logic inside import_opml: remove the redundant `|| "Imported"` after `xpath(category, ~x"@text"s)` because the `s` modifier returns an empty string, and keep the existing empty-string check `if category_name != "", do: category_name, else: "Imported"` to supply "Imported". Update references in import_opml (variables `categories`, `category_name`, and the `xpath` calls) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@feedreader/AGENTS.md`:
- Line 103: The markdown list at the bullet describing synchronization is
mis-indented causing MD005; adjust the indentation of the line that starts with
"Instead of sleeping..." so it matches sibling bullets (same number of leading
spaces) and keep the content intact, ensuring the inline code `_ =
:sys.get_state/1` remains formatted as inline code; locate the bullet in
AGENTS.md and realign it to the same indentation level as the other list items.
- Line 5: The command in the setup docs contains a typo: replace the incorrect
"misa run pre-commit" with "mise run pre-commit" so it matches the pre-commit
task defined in mise.toml; update the AGENTS.md line where "misa run pre-commit"
appears to use "mise" instead to avoid failing contributor commands.
In `@feedreader/lib/feedreader_web/components/layouts.ex`:
- Around line 134-164: The theme_toggle component renders three icon-only
buttons (see function theme_toggle and the buttons using
phx-click={JS.dispatch("phx:set-theme")} with
data-phx-theme="system"/"light"/"dark") and needs accessible labels; update each
button to include either an aria-label (e.g. aria-label="Set theme to
system"/"light"/"dark") or a visually-hidden text node (e.g. a span with your
app's sr-only class containing "System"/"Light"/"Dark") so screen readers can
announce the purpose of each button while preserving the existing icons and
phx:set-theme behavior.
In `@feedreader/lib/feedreader_web/live/entry_live/index.ex`:
- Line 99: The current recalculation using assign(:has_more,
socket.assigns.has_more && new_count >= 50) can incorrectly flip has_more to
false after a deletion; update the logic in the component handling pagination
(the assign call that references socket.assigns.has_more and new_count) so it
either preserves the previous has_more value unless you explicitly re-check the
server, or perform a fresh count/query to determine whether more pages exist.
Concretely, either keep socket.assigns.has_more when it was true (don’t AND it
with new_count) or call your backend/count function (e.g., fetch_total_count or
the existing listing query) and set has_more based on total_count > page_size
rather than new_count alone.
- Around line 69-83: The root_domain/1 function mis-combines host parts by
reversing parts but then using List.first(parts) from the original list,
producing wrong domains (e.g., "example.www"); fix by consistently using the
reversed list to build the domain: after doing rev = Enum.reverse(parts) (or
pattern-matching on Enum.reverse(parts) as [tld, second | _]), construct
"#{second}.#{tld}" (and keep the single-host and fallback branches intact) so
the function uses the top-level domain and second-level domain from the reversed
parts rather than the original parts list.
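The fix described above can be sketched as a small pure function; the module name is illustrative, not from the codebase, and the fallback branches mirror the review's description:

```elixir
defmodule RootDomain do
  # Build "second-level.tld" from the reversed host parts, so
  # "www.example.com" yields "example.com" rather than "example.www".
  def root_domain(host) do
    case host |> String.split(".") |> Enum.reverse() do
      [tld, second | _] -> "#{second}.#{tld}"
      [single] -> single
    end
  end
end
```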
In `@feedreader/lib/feedreader_web/live/feed_live/index.ex`:
- Around line 65-74: The handler for the "delete_feed" event uses bang functions
Core.get_feed! and Core.delete_feed! which will crash the LiveView on
stale/invalid ids; change to the safe variants (Core.get_feed/1 and
Core.delete_feed/1 or delete_feed when given a struct) and handle nil/{:error,
_} results: fetch the feed with Core.get_feed(id), if nil set a flash error via
put_flash and return {:noreply, socket} without changing assigns; if the fetch
succeeds attempt Core.delete_feed(feed) and on {:ok, _} reload feeds
(Core.list_feeds!) and put_flash(:info, "Feed deleted"), otherwise
put_flash(:error, "Unable to delete feed") and return {:noreply, socket}.
- Around line 1-2: The FeedreaderWeb.FeedLive.Index LiveView is currently
exposed without authentication; add the LiveView on-mount auth hook by
registering on_mount {FeedreaderWeb.LiveUserAuth, :live_user_required} on the
module (i.e. modify the module declaration for FeedreaderWeb.FeedLive.Index to
include the on_mount tuple) so mount/handle_params are only reachable by
authenticated users; alternatively, ensure the router moves the /feeds live
route into an authenticated live_session if you prefer session-level protection
instead of per-module on_mount.
- Around line 7-15: The mount/3 currently assigns :form and :feeds but doesn't
enable LiveView uploads, so the handler that pattern-matches %{path: path} will
never receive an uploaded file; call allow_upload(:opml, accept: ~w(.opml
text/xml application/xml), max_entries: 1) inside mount/3 (alongside existing
assigns) and update the import handler to call consume_uploaded_entries(socket,
:opml, fn %{path: tmp_path}, _entry -> tmp_path end) (or similar) to get the
uploaded file path(s) instead of pattern-matching %{path: path} from params so
the OPML file is actually received and processed.
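A minimal sketch of the wiring described above, assuming the `Core.list_feeds!/0` and `Core.import_opml/1` names used elsewhere in this review:

```elixir
def mount(_params, _session, socket) do
  socket =
    socket
    |> assign(:feeds, Core.list_feeds!())
    |> assign(:form, to_form(%{}))
    # Enable the upload so the client actually transfers file contents
    |> allow_upload(:opml, accept: ~w(.opml text/xml application/xml), max_entries: 1)

  {:ok, socket}
end

def handle_event("import_opml", _params, socket) do
  # consume_uploaded_entries/3 invokes the callback per completed upload;
  # the callback returns {:ok, result} to consume the entry.
  contents =
    consume_uploaded_entries(socket, :opml, fn %{path: tmp_path}, _entry ->
      {:ok, File.read!(tmp_path)}
    end)

  Enum.each(contents, &Core.import_opml/1)
  {:noreply, socket}
end
```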
In `@feedreader/lib/feedreader_web/live/feed_live/index.html.heex`:
- Around line 73-89: The file input is using a regular <input type="file"> which
only sends a filename; update LiveView to use LiveView uploads by calling
allow_upload(:opml, ...) in mount/3, replace the plain input with a
live_file_input using upload={`@uploads.opml`} in the template (referencing
`@uploads.opml` and the component <.live_file_input>), and update the
"import_opml" event handler to read files via consume_uploaded_entries/3
(processing the consumed entry paths) instead of expecting an upload map with
{path: path}; ensure IDs/names still match the form and validate_opml uses
uploaded entries for validation.
In `@feedreader/lib/feedreader/accounts/user.ex`:
- Around line 1-51: Feedreader.Accounts.User is missing authentication
strategies and attributes required by the router’s magic link and the
log_out_everywhere add-on: update the authentication block to declare the needed
strategies (e.g. include :magic_link and, if using passwords, a password
strategy) and configure the magic link strategy settings, and add the missing
attributes to the resource (at minimum an email attribute and, if using password
auth or log_out_everywhere, a password_hash attribute or the password strategy
configuration). Locate the authentication block in Feedreader.Accounts.User and
add strategies [:magic_link, ...] with appropriate strategy config (magic link
settings) and add attributes like attribute :email, :string, allow_nil?: false
(unique as needed) and attribute :password_hash, :string (or wire up the
AshAuthentication password strategy) so the magic_sign_in_route and
log_out_everywhere add-on will function.
In `@feedreader/lib/feedreader/core/entry.ex`:
- Around line 18-33: The three read actions (read :unread, read :starred, read
:history) currently use offset pagination; change their pagination lines to use
keyset pagination by replacing "pagination offset?: true, default_limit: 50"
with "pagination keyset?: true, default_limit: 50" in each action (keep the
existing prepare build sort orders and default_limit). Ensure keyset pagination
is applied for read :unread, read :starred, and read :history so they match
SPEC.md and the infinite-scroll UX.
- Around line 55-69: The upsert_from_feed action currently accepts user-managed
fields is_read and is_starred, which risks overwriting user state; remove
is_read and is_starred from the accept list in the create :upsert_from_feed
block so the action only accepts feed metadata (external_id, title,
content_link, comments_link, published_at, feed_id) and keep upsert? true and
upsert_identity :unique_entry_per_feed unchanged; locate the create
:upsert_from_feed definition in entry.ex and delete the is_read and is_starred
entries from its accept array.
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 64-81: The Atom link is stored in a href attribute, but the code
in fetch_feed.ex currently uses xpath(item, ~x"./link/text()"s) so Atom entries
yield empty link and external_id falls back to guid; update the link extraction
(the binding named link used when building external_id) to try attribute-based
XPaths first (e.g. ./link/@href and ./link[`@rel`="alternate"]/@href) and then
fall back to ./link/text(), ensuring the xpath call(s) on the item node return
the href when present so external_id uses the real Atom link; adjust the logic
around the link and external_id bindings accordingly.
- Around line 164-176: The parse_tz_offset functions are treating the numeric
portion (e.g. "0500") as minutes rather than HHMM; change them to parse the
integer from rest, split it into hours and minutes (hours = div(value, 100),
minutes = rem(value, 100)), compute total_minutes = hours * 60 + minutes, and
return total_minutes (negated for the "-" clause). Update both
parse_tz_offset("+" <> rest) and parse_tz_offset("-" <> rest) to use this
hours/minutes decomposition and keep the fallback to 0 on parse failure.
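The hours/minutes decomposition described above, sketched as a standalone module (names are illustrative):

```elixir
defmodule TzOffset do
  # "+0530" means 5 hours 30 minutes, i.e. 330 minutes — not 530 minutes.
  def parse_tz_offset("+" <> rest), do: offset_minutes(rest)
  def parse_tz_offset("-" <> rest), do: -offset_minutes(rest)
  def parse_tz_offset(_), do: 0

  defp offset_minutes(rest) do
    case Integer.parse(rest) do
      {value, _} -> div(value, 100) * 60 + rem(value, 100)
      :error -> 0
    end
  end
end
```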
In `@feedreader/lib/feedreader/workers/scheduler.ex`:
- Around line 22-24: The comment and implementation disagree: the comment says
"0-5 minutes" but delay_minutes is set with :rand.uniform(5) (which yields
1..5). Fix by either updating the comment to "1-5 minutes" or change the code
that sets delay_minutes to produce 0..5, e.g. replace :rand.uniform(5) with
:rand.uniform(6) - 1 so delay_minutes yields 0..5; adjust the surrounding code
in the scheduler where delay_minutes is used if needed.
In `@feedreader/priv/repo/migrations/20260324172014_add_created_at.exs`:
- Around line 24-28: The migration currently sets created_at for all existing
rows to the same timestamp via datetime('now'); update the SQL in the
execute(...) that INSERTs into entries_new (selecting from entries) so the
created_at column uses COALESCE(published_at, datetime('now')) instead of
datetime('now') — i.e., when selecting for the created_at value in the SELECT of
the INSERT INTO entries_new (id, ..., created_at) FROM entries, replace
datetime('now') with COALESCE(published_at, datetime('now')) to preserve entry
age when available.
In `@feedreader/test/feedreader_web/components/time_helpers_test.exs`:
- Around line 12-13: The tests in
feedreader_web/components/time_helpers_test.exs are flaky because each assertion
calls DateTime.utc_now() separately and small execution delays shift the
computed bucket; update the four assertions that compare
TimeHelpers.humanize_date(...) to exact strings ("30m ago", "5h ago", "3d ago",
"2w ago") to use Regex.match?/2 instead so the test accepts the computed bucket
or its adjacent bucket (e.g., allow "30m" or "29m", "5h" or "4h", "3d" or
"2d"/"3d", "2w" or "1w"/"2w" as appropriate); locate the failing assertions
referring to TimeHelpers.humanize_date and replace the exact equality asserts
with Regex.match? checks using patterns that accept the expected bucket ±1 unit.
In `@SPEC.md`:
- Around line 1-204: There are two near-identical spec files causing
duplication: remove the repository-root SPEC.md and keep the canonical spec at
feedreader/SPEC.md; ensure you delete the top-level SPEC.md file, confirm
feedreader/SPEC.md contains the intended content (including the trailing
newline), and normalize line endings (e.g., LF) so both files would be identical
before removal to avoid accidental divergence.
---
Nitpick comments:
In `@AGENTS.md`:
- Around line 1-5: Update the wording that currently says "mise for
erlang/elixir deps and developer tasks" to accurately state that "mise" handles
Erlang/Elixir version management and developer tasks and that "mix" is used to
manage package dependencies (e.g., change the phrase to "mise for Erlang/Elixir
version management and developer tasks; use mix for package dependencies").
Ensure the term "mise" and the clarification about "mix" are used exactly so
contributors understand the distinction.
In `@feedreader/assets/js/app.js`:
- Line 28: The code assumes the meta tag exists and calls getAttribute directly
on document.querySelector("meta[name='csrf-token']"), which will throw if null;
update the initialization of csrfToken to first query the element into a
variable (e.g., const csrfMeta =
document.querySelector("meta[name='csrf-token']")) and then defensively check
csrfMeta before calling getAttribute (falling back to null or an empty string
and optionally logging a warning) so csrfToken is safe to use elsewhere.
In `@feedreader/config/config.exs`:
- Line 82: The LiveView signing_salt is hardcoded under the live_view config
(signing_salt) — replace this by loading a secure value from environment/secrets
at runtime: remove or leave a dev-only default in config.exs and add code in
runtime.exs to read an env var (e.g. LIVE_VIEW_SIGNING_SALT) and set the
live_view signing_salt accordingly, failing fast or logging an error if the
value is missing in production; update any references to live_view/signing_salt
to use the runtime-provided value.
- Around line 12-23: The Oban config currently sets notifier: Oban.Notifiers.PG
which is unnecessary for a single-node SQLite deployment using
Oban.Engines.Lite; remove the notifier: Oban.Notifiers.PG entry from the Oban
config block so the Lite engine relies on its polling mechanism (stage_interval)
and avoid redundant notifier setup and "isolated" logs—update the config where
Oban is configured (look for the Oban, engine: Oban.Engines.Lite block) to
simply omit the notifier key.
In `@feedreader/lib/feedreader_web/components/layouts.ex`:
- Around line 35-38: The attr(:current_scope, :map) declared for the component
is unused in the app/1 component template; either remove the attribute
declaration or wire it into the app/1 template to implement scope-based UI logic
(e.g., use current_scope to conditionally render navigation or scope-specific
classes). Locate the attr(:current_scope, :map) declaration and the app/1
component definition and either delete the unused attr or add the necessary
pattern matching/assign usage of current_scope inside app/1 (e.g., reference
assigns.current_scope or accept it as a parameter) so the attribute is actually
consumed.
In `@feedreader/lib/feedreader_web/components/time_helpers.ex`:
- Around line 8-21: The current humanize_date/1 only matches %DateTime{} and
will crash on %NaiveDateTime{} inputs; add a new clause like
humanize_date(%NaiveDateTime{} = ndt) that converts the naive datetime to a
DateTime (e.g., DateTime.from_naive(ndt, "Etc/UTC") or DateTime.from_naive(ndt,
:utc)), handle the {:ok, dt} and {:error, _} cases, and then delegate to the
existing humanize_date(DateTime) logic (or provide a sensible fallback) so
NaiveDateTime values are supported.
In `@feedreader/lib/feedreader_web/live/entry_live/index.ex`:
- Around line 18-26: The code assigns entries twice: a plain assign(:entries,
entries) and a LiveView stream via stream(:entries, entries, reset: true);
remove the redundant assign(:entries, entries) so only the stream is used
(locate the block that pipes socket through assign(:entries, entries) and
stream(:entries, entries, reset: true) and delete the assign call), ensuring
templates and helpers reference the streamed collection via the LiveView stream
API rather than the removed plain assign.
In `@feedreader/lib/feedreader_web/live/entry_live/index.html.heex`:
- Around line 27-36: Extract the metadata rendering into a view helper (e.g.,
entry_meta/1 or entry_meta(feed, published_at)) so the template no longer
contains the inline assignments and cond; move the logic that calls
feed_display_name(entry.feed) and humanize_date(entry.published_at) into that
helper, have it return the final string or safe HTML (handling the three cases:
both present, only feed_name, only date_str), and replace the cond block in the
template with a single call to the helper (e.g., <%= entry_meta(entry) %>) so
the logic is testable in isolation.
- Line 16: The template uses a defensive nil-check "(`@streams` &&
`@streams.entries`) || []" which masks why `@streams` isn't set; remove that
fallback and ensure `@streams` is assigned in mount/3 (or the relevant
LiveView/LiveComponent init) so the template can simply iterate over
`@streams.entries`. Locate the LiveView mount/3 (or the call that assigns
:streams) and initialize assigns[:streams] to a structure with an entries
enumerable (e.g., %{entries: []} or the actual stream source) before rendering
so the template loop can safely use `@streams.entries` directly.
- Around line 14-77: The Load More control (div id="load-more-container" and
button id="load-more-btn" which triggers the "load_more" phx event) is currently
inside the stream container (`#entries` with phx-update="stream") and can be
removed/duplicated by LiveView stream updates; move the entire
load-more-container block out of the div that has phx-update="stream" so it sits
after the closing tag for the `#entries` stream container, ensuring the stream
only contains stream items (the :for block producing {id, entry}) and the
load-more button remains a static control outside the stream.
In `@feedreader/lib/feedreader/core.ex`:
- Around line 30-39: The code currently discards the error reason returned by
add_feed; update the Enum.map block so the error branch preserves the original
reason (e.g., change the clause in the map from {:error, _} -> {:error,
feed_attrs} to {:error, reason} -> {:error, {feed_attrs, reason}}) so callers of
the function (and any OPML import flow) receive detailed failure info;
optionally also add a process_logger/error_log call inside the {:error, reason}
clause if you want immediate logging of the failure.
- Around line 4-12: Move the `import SweetXml` out of the import_opml/1 function
to the top of the module (module-level import) and simplify the defaulting logic
inside import_opml: remove the redundant `|| "Imported"` after `xpath(category,
~x"@text"s)` because the `s` modifier returns an empty string, and keep the
existing empty-string check `if category_name != "", do: category_name, else:
"Imported"` to supply "Imported". Update references in import_opml (variables
`categories`, `category_name`, and the `xpath` calls) accordingly.
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 49-51: Move the import of SweetXml out of the parse_feed/1
function to the module level: add import SweetXml at the top of the module (so
functions like parse_feed/1 can use SweetXml helpers without importing inside
the function) and remove the inline import SweetXml from inside the
parse_feed(body) function; ensure no other local imports collide when you place
it at module scope.
In `@feedreader/lib/feedreader/workers/scheduler.ex`:
- Around line 26-28: Replace the panic-on-failure Oban.insert! call inside the
scheduling loop with a non-raising Oban.insert call and handle both outcomes so
one failing insert doesn't stop scheduling remaining feeds: call
FeedReader.Workers.FetchFeed.new(schedule_in: delay_minutes * 60) as before but
use Oban.insert(...) and pattern-match the result {:ok, job} to proceed and
{:error, changeset_or_error} to Logger.error with context (include feed.id and
the error/changeset) so failures are logged but the loop continues; update the
code around the Oban.insert! invocation in the scheduler function to implement
this handling.
- Around line 26-28: The scheduler can enqueue duplicate FetchFeed jobs; update
the job creation in FeedReader.Workers.FetchFeed.new used here to include Oban
uniqueness options so jobs are deduplicated by feed_id (e.g., add a unique
option keyed on the feed identifier and a reasonable period), then call
Oban.insert! as before; specifically modify the call that builds the job
(FeedReader.Workers.FetchFeed.new(schedule_in: ...)) to include Oban's unique
configuration (keys/period) so duplicate fetches for the same feed are
prevented.
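A combined sketch of the two scheduler findings above (non-raising insert plus job uniqueness); the queue name and arg shape are assumptions:

```elixir
defmodule FeedReader.Workers.FetchFeed do
  # Dedupe pending/executing fetches for the same feed within the
  # 10-minute fetch interval, keyed on feed_id in the job args.
  use Oban.Worker,
    queue: :default,
    unique: [period: 600, keys: [:feed_id]]

  # ... perform/1 as before ...
end

# In the scheduler loop, prefer Oban.insert/1 over Oban.insert!/1 so a
# single failing insert doesn't abort scheduling the remaining feeds:
job = FeedReader.Workers.FetchFeed.new(%{feed_id: feed.id}, schedule_in: delay_minutes * 60)

case Oban.insert(job) do
  {:ok, _job} -> :ok
  {:error, reason} ->
    Logger.error("Failed to enqueue fetch for feed #{feed.id}: #{inspect(reason)}")
end
```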
In `@feedreader/mise.toml`:
- Around line 41-46: The file mise.toml is missing a trailing newline at EOF
which breaks POSIX conventions; open the file (look for the depends = [...]
array block) and ensure the file ends with a single newline character by adding
a newline after the closing bracket so the EOF is terminated properly.
- Around line 1-3: The current mise.toml uses floating versions ("latest") for
the tools keys elixir and erlang which breaks reproducibility; update the elixir
and erlang entries in mise.toml to pinned semver strings (for example change
elixir = "latest" to a specific version like elixir = "1.17.0" and erlang =
"27.0") so all developers and CI use the same toolchain; pick and verify the
exact versions your project supports and replace the "latest" values
accordingly.
In `@feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs`:
- Around line 42-56: The down/0 migration contains an unconditional raise which
halts execution and makes the subsequent drop/drop_if_exists lines (e.g., drop
table(:entries), drop_if_exists unique_index(:feeds, ...), drop table(:feeds))
unreachable; to fix, either remove those dead statements so down/0 only raises
with the explanatory message about entries_feed_id_fkey, or replace the raise
call with IO.warn/2 (preserving the same message) so the migration emits a
warning and then continues to execute the remaining cleanup statements in
down/0.
In `@feedreader/priv/repo/migrations/20260324172014_add_created_at.exs`:
- Around line 8-42: Wrap the whole migration body in an explicit transaction
inside def up and make destructive DDL idempotent: replace the bare "DROP TABLE
entries" execute call with "DROP TABLE IF EXISTS entries" and ensure the
ALTER/INSERT steps only run when the original entries table exists (e.g., check
sqlite_master or guard before the INSERT INTO entries_new ... SELECT ... FROM
entries); keep CREATE TABLE IF NOT EXISTS for entries_new and the CREATE INDEX
calls as-is, and commit/rollback the transaction around these execute(...)
blocks to ensure atomicity if any step fails.
In `@feedreader/priv/resource_snapshots/repo/entries/20260324171514_dev.json`:
- Around line 97-113: The foreign key "entries_feed_id_fkey" for the "source":
"feed_id" currently has "on_delete": null; update the Ash resource relationship
that defines the feed_id association (the relationship referencing feeds via
destination_attribute "id" or the relationship named feed/feed_id in the Entries
resource) to explicitly set an on_delete behavior (e.g., on_delete: :delete or
on_delete: :restrict) so the generated DB migration/metadata includes a non-null
on_delete value and prevents orphaned entries.
In `@feedreader/test/feedreader_web/live/entry_live_test.exs`:
- Around line 105-126: The test currently only ensures the load_more event is
handled; after calling render_click(view, "load_more") capture the updated
rendered HTML (via render(view) or the return value of render_click) and assert
additional entries are present by either (a) comparing the count of entry
elements before and after the click (e.g., count occurrences of the entry
selector used in your app such as ".entry" or the exact entry markup) or (b)
asserting for the presence of a later entry title like "Entry 60" that was
previously not visible; use the existing test helpers (has_element?/render/view)
and the same setup that uses Core.upsert_from_feed! to validate that entries
increased after invoking render_click(view, "load_more").
In `@feedreader/test/feedreader/core/entry_test.exs`:
- Around line 46-62: Add a complementary test that verifies Core.toggle_read/1
flips is_read from true to false: create or upsert an entry with is_read: true
(using Core.upsert_from_feed! as in the existing test), call
Core.toggle_read(entry), assert the result is {:ok, updated} and assert
updated.is_read == false; place this alongside the existing "toggles is_read
from false to true" test in the same describe block to mirror the toggle_starred
coverage.
- Around line 105-134: The test creates two entries but toggles the "read" entry
back to unread, so it doesn't verify filtering; update the test in
entry_test.exs so one entry remains read and one unread (use
Core.upsert_from_feed! to create an entry with is_read: true and do not call
Core.toggle_read on that entry), then call Core.list_unread() and assert that
only the unread entry is returned (e.g., length(results.results) == 1 and that
the returned IDs/titles do not include the originally read entry) to confirm
read entries are excluded.
In `@feedreader/test/support/conn_case.ex`:
- Around line 13-15: Update the module documentation in FeedreaderWeb.ConnCase
(the `@moduledoc` text) to reference SQLite instead of PostgreSQL and adjust the
guidance about async tests accordingly (e.g., "Because this project uses SQLite,
running database tests asynchronously via `use FeedreaderWeb.ConnCase, async:
true` is not recommended"). Locate the `@moduledoc` in conn_case.ex and replace
the PostgreSQL-specific sentence with one that accurately reflects the project's
SQLite migrations and async-testing recommendation.
In `@mise.toml`:
- Around line 1-3: Replace the non-deterministic 'latest' tool pins in mise.toml
by specifying concrete version strings for the Erlang and Elixir keys (replace
erlang = 'latest' and elixir = 'latest' with pinned versions like erlang =
'27.0' and elixir = '1.17.0' or your chosen stable releases) so CI and developer
machines resolve identical runtimes; update the erlang and elixir entries to the
exact version numbers you want to standardize on.
- Add Oban.Plugins.Lifeline with 5-minute rescue_after to recover orphaned executing jobs on server restart
- Fix RSS link parsing: `||` treated `""` as truthy, preventing fallback to `link/text()` for RSS feeds
- Add Entry.update action for is_read/is_starred, fixing tests that passed these to upsert_from_feed, which doesn't accept them
- Add `@moduledoc` to modules flagged by Credo
- Set `exit_status: 0` on CyclomaticComplexity and Nesting Credo checks
Actionable comments posted: 6
🧹 Nitpick comments (3)
feedreader/lib/feedreader_web/components/layouts.ex (1)
47-66: Navigation active state uses exact path matching.

The current implementation uses exact string comparison (e.g., `@current_path == "/feeds"`). This won't highlight the nav item for sub-paths like `/feeds/123` or `/feeds/new`. If hierarchical highlighting is desired, consider using `String.starts_with?/2`.

```diff
 <li>
-  <.link navigate="/feeds" class={if @current_path == "/feeds", do: "active"}>
+  <.link navigate="/feeds" class={if String.starts_with?(@current_path, "/feeds"), do: "active"}>
     Feeds
   </.link>
 </li>
```

This is fine to leave as-is if exact matching is the intended behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader_web/components/layouts.ex` around lines 47 - 66, The nav items currently use exact comparisons against `@current_path` (e.g., `@current_path` == "/feeds"), so subpaths like "/feeds/123" won't set the active class; update the class bindings on the .link elements to use String.starts_with?/2 for hierarchical routes (e.g., replace `@current_path` == "/feeds" with String.starts_with?(`@current_path`, "/feeds")) while keeping the Unread root check exact (keep `@current_path` == "/") so "/" does not match other paths; modify the class attributes for the .link elements referencing "/starred", "/history", and "/feeds" accordingly.

feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs (1)
23-26: Consider adding `ON DELETE CASCADE` to the foreign key.

While the Ash resource defines `cascade_destroy(:entries)` at the application level, adding database-level cascading provides a safety net if entries are ever orphaned (e.g., direct SQL operations, failed transactions).

♻️ Proposed fix

```diff
-add(:feed_id, references(:feeds, column: :id, name: "entries_feed_id_fkey", type: :uuid),
+add(:feed_id, references(:feeds, column: :id, name: "entries_feed_id_fkey", type: :uuid, on_delete: :delete_all),
   null: false
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs` around lines 23 - 26, Update the entries table foreign key definition so the references call for :feed_id includes database-level cascading by adding on_delete: :delete_all (i.e., enable ON DELETE CASCADE) to the existing add(:feed_id, references(:feeds, column: :id, name: "entries_feed_id_fkey", type: :uuid), null: false) declaration; this ensures the DB will delete entries when a feed is removed in addition to the existing Ash cascade_destroy(:entries).

feedreader/test/feedreader_web/live/entry_live_test.exs (1)
127-130: Assertion logic is correct but could be clearer.

The `||` works because `=~` returns truthy (the matched substring) or `nil`. Consider making the intent explicit:

✨ Proposed clarity improvement

```diff
-assert updated_html =~ "Entry 51" || updated_html =~ "Entry 60"
+assert updated_html =~ "Entry 51" or updated_html =~ "Entry 60"
```

Or use `Enum.any?/2`:

```elixir
assert Enum.any?(["Entry 51", "Entry 60"], &(updated_html =~ &1))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/test/feedreader_web/live/entry_live_test.exs` around lines 127 - 130, The assertion using `assert updated_html =~ "Entry 51" || updated_html =~ "Entry 60"` is relying on truthy/nil semantics; update the test in the block that calls `render(view)` and assigns `updated_html` to perform an explicit boolean check—e.g., replace the `||` expression with a clear intent such as `assert Enum.any?(["Entry 51", "Entry 60"], fn s -> updated_html =~ s end)` (or use explicit `!= nil` checks) so the assertion reads unambiguously and references `updated_html` and the `render(view)` result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@feedreader/AGENTS.md`:
- Line 364: The sentence in the testing guidance contains a typo: change "Never
tests again raw HTML" to "Never test against raw HTML" so the recommendation
reads correctly and matches the referenced helpers (element/2, has_element/2,
has_element?/2) — update the phrasing around `element/2` and `has_element/2`
(and the example `assert has_element?(view, "#my-form")`) to use "test against"
instead of "tests again".
- Around line 18-19: The HEEx example contains malformed syntax: `(<.input
class="myclass px-2 py-1 rounded-lg">)` should be a valid self-closing
component; replace that snippet with a correct HEEx self-closing tag `<.input
class="myclass px-2 py-1 rounded-lg" />` so the example is syntactically valid
and won't break rendering.
In `@feedreader/assets/js/app.js`:
- Line 25: The import line `import {hooks as colocatedHooks} from
"phoenix-colocated/feedreader"` references a package that doesn't exist in the
repo or mix.exs; either remove this import and implement the needed hooks
locally (replace usages of `colocatedHooks` with your locally defined `hooks`
object in feedreader/assets/js/app.js) or add and install the
`phoenix-colocated` dependency (add it to assets/package.json and run npm/yarn
install and, if required, add the Elixir package to mix.exs), then ensure the
import path matches the installed package entry point.
In `@feedreader/lib/feedreader/accounts/user.ex`:
- Around line 20-22: Create the missing Feedreader.Accounts.Emails module and
implement a deliver_magic_link function (accepting the same arity used by the
sender callback: deliver_magic_link(user_or_email, token) or
deliver_magic_link(user_or_email, token, opts)) so the sender in the magic_link
strategy can call it without crashing; implement the function to build and send
the email (or delegate to your mailer) and return a success/error tuple
consistent with the rest of the app.
In `@feedreader/lib/feedreader/core/feed.ex`:
- Around line 24-29: The call set_attribute(:last_fetched_at,
DateTime.utc_now()) is evaluating DateTime.utc_now() at compile time; change it
to pass a zero-arity function so the timestamp is produced at runtime (e.g.,
replace the direct DateTime.utc_now() value with a zero-arity function
reference) in the update :log_fetch_success block so
set_attribute(:last_fetched_at, ...) is invoked when the action runs.
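For reference, a minimal sketch of the runtime-evaluated form (the action and attribute names come from the review; the exact Ash block shape is assumed):

```elixir
update :log_fetch_success do
  # A zero-arity function defers evaluation to when the action runs,
  # instead of freezing a single DateTime at compile time.
  change set_attribute(:last_fetched_at, &DateTime.utc_now/0)
end
```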
In `@feedreader/lib/feedreader/workers/scheduler.ex`:
- Around line 28-31: The scheduler can enqueue duplicate FetchFeed jobs for the
same feed; update the FetchFeed worker module (FeedReader.Workers.FetchFeed) to
declare an Oban uniqueness constraint that prevents multiple pending/executing
jobs for the same feed for the fetch interval duration (10 minutes). Add an Oban
unique config on the worker (set period to `@fetch_interval_minutes` * 60 or 600
seconds) and scope the uniqueness to the feed identifier in the job args (so
uniqueness keys/fields reference the feed_id in args); if your Oban version
requires module attributes (e.g., `@unique`) or worker options, add it there so
the existing Scheduler call (FeedReader.Workers.FetchFeed.new(...)) will fail to
enqueue duplicate jobs within the 10-minute window.
---
Nitpick comments:
In `@feedreader/lib/feedreader_web/components/layouts.ex`:
- Around line 47-66: The nav items currently use exact comparisons against
`@current_path` (e.g., `@current_path` == "/feeds"), so subpaths like "/feeds/123"
won't set the active class; update the class bindings on the .link elements to
use String.starts_with?/2 for hierarchical routes (e.g., replace `@current_path`
== "/feeds" with String.starts_with?(`@current_path`, "/feeds")) while keeping the
Unread root check exact (keep `@current_path` == "/") so "/" does not match other
paths; modify the class attributes for the .link elements referencing
"/starred", "/history", and "/feeds" accordingly.
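A hedged sketch of the suggested bindings (link labels and the "active" class are illustrative):

```heex
<%!-- Prefix match for hierarchical routes --%>
<.link navigate="/feeds" class={[String.starts_with?(@current_path, "/feeds") && "active"]}>
  Feeds
</.link>
<%!-- Keep the root check exact so "/" does not match every path --%>
<.link navigate="/" class={[@current_path == "/" && "active"]}>
  Unread
</.link>
```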
In `@feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs`:
- Around line 23-26: Update the entries table foreign key definition so the
references call for :feed_id includes database-level cascading by adding
on_delete: :delete_all (i.e., enable ON DELETE CASCADE) to the existing
add(:feed_id, references(:feeds, column: :id, name: "entries_feed_id_fkey",
type: :uuid), null: false) declaration; this ensures the DB will delete entries
when a feed is removed in addition to the existing Ash
cascade_destroy(:entries).
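A sketch of the adjusted column definition (only `on_delete: :delete_all` is new relative to the declaration quoted above):

```elixir
add :feed_id,
    references(:feeds,
      column: :id,
      name: "entries_feed_id_fkey",
      type: :uuid,
      # Database-level cascade, mirroring Ash's cascade_destroy(:entries)
      on_delete: :delete_all
    ),
    null: false
```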
In `@feedreader/test/feedreader_web/live/entry_live_test.exs`:
- Around line 127-130: The assertion using `assert updated_html =~ "Entry 51" ||
updated_html =~ "Entry 60"` is relying on truthy/nil semantics; update the test
in the block that calls `render(view)` and assigns `updated_html` to perform an
explicit boolean check—e.g., replace the `||` expression with a clear intent
such as `assert Enum.any?(["Entry 51", "Entry 60"], fn s -> updated_html =~ s
end)` (or use explicit `!= nil` checks) so the assertion reads unambiguously and
references `updated_html` and the `render(view)` result.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3d082846-52ea-4cbe-9600-b71bdd0fddcf
📒 Files selected for processing (25)
- AGENTS.md
- feedreader/.credo.exs
- feedreader/AGENTS.md
- feedreader/assets/js/app.js
- feedreader/config/config.exs
- feedreader/config/runtime.exs
- feedreader/lib/feedreader/accounts/user.ex
- feedreader/lib/feedreader/core.ex
- feedreader/lib/feedreader/core/entry.ex
- feedreader/lib/feedreader/core/feed.ex
- feedreader/lib/feedreader/workers/fetch_feed.ex
- feedreader/lib/feedreader/workers/scheduler.ex
- feedreader/lib/feedreader_web/components/layouts.ex
- feedreader/lib/feedreader_web/components/time_helpers.ex
- feedreader/lib/feedreader_web/live/entry_live/index.ex
- feedreader/lib/feedreader_web/live/entry_live/index.html.heex
- feedreader/lib/feedreader_web/live/feed_live/index.ex
- feedreader/lib/feedreader_web/live/feed_live/index.html.heex
- feedreader/mise.toml
- feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs
- feedreader/priv/repo/migrations/20260324172014_add_created_at.exs
- feedreader/test/feedreader/core/entry_test.exs
- feedreader/test/feedreader_web/components/time_helpers_test.exs
- feedreader/test/feedreader_web/live/entry_live_test.exs
- feedreader/test/support/conn_case.ex
✅ Files skipped from review due to trivial changes (8)
- AGENTS.md
- feedreader/test/feedreader_web/components/time_helpers_test.exs
- feedreader/lib/feedreader_web/live/entry_live/index.html.heex
- feedreader/lib/feedreader_web/live/feed_live/index.html.heex
- feedreader/config/config.exs
- feedreader/test/feedreader/core/entry_test.exs
- feedreader/mise.toml
- feedreader/.credo.exs
🚧 Files skipped from review as they are similar to previous changes (7)
- feedreader/test/support/conn_case.ex
- feedreader/priv/repo/migrations/20260324172014_add_created_at.exs
- feedreader/config/runtime.exs
- feedreader/lib/feedreader_web/components/time_helpers.ex
- feedreader/lib/feedreader/workers/fetch_feed.ex
- feedreader/lib/feedreader_web/live/feed_live/index.ex
- feedreader/lib/feedreader/core/entry.ex
📜 Review details
🧰 Additional context used
🪛 LanguageTool
feedreader/AGENTS.md
[style] ~5-~5: Consider using a different verb for a more formal wording.
Context: ... when you are done with all changes and fix any pending issues - Use the already in...
(FIX_RESOLVE)
[style] ~15-~15: This phrase is redundant. Consider using “outside”.
Context: ...rbidden** from calling <.flash_group> outside of the layouts.ex module - Out of the bo...
(OUTSIDE_OF)
[grammar] ~24-~24: Ensure spelling is correct
Context: ...ve, and visually stunning interfaces. - Tailwindcss v4 **no longer needs a tailwind.config....
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~52-~52: Use a hyphen to join words.
Context: ...s - Elixir lists do not support index based access via the access syntax ...
(QB_NEW_EN_HYPHEN)
[grammar] ~60-~60: Use a hyphen to join words.
Context: ..., pattern matching, or List` for index based list access, ie: i = 0 ...
(QB_NEW_EN_HYPHEN)
[style] ~66-~66: In American English, abbreviations like “etc.” require a period.
Context: ... expressions like if, case, cond, etc you must bind the result of the exp...
(ETC_PERIOD)
[grammar] ~81-~81: Use a hyphen to join words.
Context: ... such as my_struct.field or use higher level APIs that are available on the str...
(QB_NEW_EN_HYPHEN)
[grammar] ~85-~85: Ensure spelling is correct
Context: ...hould be reserved for guards - Elixir's builtin OTP primitives like DynamicSupervisor...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[style] ~92-~92: ‘almost never’ might be wordy. Consider a shorter alternative.
Context: ...--failed-mix deps.clean --all` is almost never needed. Avoid using it unless you...
(EN_WORDINESS_PREMIUM_ALMOST_NEVER)
[style] ~131-~131: In American English, abbreviations like “etc.” require a period.
Context: ...s to key elements (like forms, buttons, etc) when writing templates, these IDs can ...
(ETC_PERIOD)
[style] ~155-~155: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...literal curly's like { or }. If you want to show a textual code snippet on the page...
(REP_WANT_TO_VB)
[style] ~363-~363: In American English, abbreviations like “etc.” require a period.
Context: ...element/2, has_element/2`, selectors, etc - Never tests again raw HTML, **alw...
(ETC_PERIOD)
🔇 Additional comments (16)
feedreader/lib/feedreader_web/components/layouts.ex (3)
129-162: Accessibility labels are now present - previous concern addressed. The theme toggle buttons now include appropriate `aria-label` attributes ("Set theme to system", "Set theme to light", "Set theme to dark"), making them accessible to screen reader users.
1-12: LGTM! Standard Phoenix layout module setup with proper use of `FeedreaderWeb, :html` and template embedding.
91-122: LGTM! The flash group implementation follows Phoenix conventions with proper accessibility (`aria-live="polite"`) and standard LiveView disconnection handling. The component correctly passes `kind` and `flash` attributes matching the `flash/1` component's signature.
feedreader/assets/js/app.js (3)
28-34: LGTM! The CSRF token extraction with null fallback is appropriately defensive, and the LiveSocket configuration follows Phoenix conventions. The `longPollFallbackMs: 2500` provides graceful degradation for WebSocket connection issues.
36-48: LGTM! Standard Phoenix LiveView setup for progress bar indication and socket connection. The window exposure with documented debug commands is helpful for development troubleshooting.
56-83: LGTM! The development-only features are correctly gated behind the environment check (esbuild replaces `process.env.NODE_ENV` at build time). The key tracking for editor shortcuts and capture-phase click listener are appropriate for this dev tooling use case.
feedreader/lib/feedreader_web/live/entry_live/index.ex (3)
117-136: `has_more` may become stale after toggling entries off the list. When an entry is removed from the current view (e.g., marking as read on the unread page), `entry_count` decreases but `has_more` isn't recalculated. Users might miss entries if `has_more` should have become `true` after removal creates room. This is a minor UX edge case since `load_more` will correctly refresh `has_more` from the server.
66-82: LGTM - root_domain logic is now correct. The destructuring `[tld, second | _] -> "#{second}.#{tld}"` properly extracts the domain from the reversed parts list (e.g., `["com", "example", "www"]` → `"example.com"`).
138-155: LGTM! The `load_more` implementation correctly increments the offset, fetches the next page, and appends entries to the stream with `at: -1`.
feedreader/lib/feedreader/core/feed.ex (1)
36-68: LGTM! The attribute definitions, relationships, and identity constraint are well-structured. The `cascade_destroy(:entries)` ensures referential integrity when feeds are deleted.
feedreader/lib/feedreader/workers/scheduler.ex (1)
1-44: LGTM on the overall structure. The scheduler logic is sound: fetching the feeds list, checking the interval, staggering jobs, and logging errors. The comment on line 24 correctly states "1-5 minutes", matching `:rand.uniform(5)` behavior.
feedreader/priv/repo/migrations/20260324122843_add_core_resources.exs (1)
45-63: LGTM! The down migration handles cleanup correctly and includes a helpful warning about SQLite's foreign key constraint limitations.
feedreader/test/feedreader_web/live/entry_live_test.exs (2)
1-83: LGTM! Good test coverage for the LiveView including index display, toggle interactions, page titles for different routes, and DOM element presence checks.
85-166: LGTM! Pagination tests properly verify the "Load More" functionality and filter behavior. The setup with 60 entries correctly tests the 50-item page boundary.
feedreader/lib/feedreader/core.ex (2)
42-63: LGTM! The Ash domain resource definitions correctly expose the necessary actions for feeds and entries, aligning with the underlying resource action names.
7-40: Root-level feeds (not nested in categories) are not supported by the current implementation. The XPath `//body/outline` extracts only top-level category outlines, and `./outline[@xmlUrl]` gets their children. OPML files with feeds directly under `<body>` (not grouped in category outlines) will be skipped. The test fixture and test cases expect an OPML structure with categorized feeds (e.g., Miniflux export format), suggesting this behavior may be intentional. If support for flat or mixed OPML structures is desired in the future, the implementation would need to be extended to extract and handle root-level feeds separately.
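If mixed structures are ever needed, one possible extension is an XPath union (a sketch; `doc` and the attribute names follow the OPML convention, not the project's actual parser code):

```elixir
import SweetXml

# Matches feeds directly under <body> as well as feeds nested one level
# deep inside category outlines.
feeds =
  xpath(doc, ~x"//body/outline[@xmlUrl] | //body/outline/outline[@xmlUrl]"l,
    title: ~x"./@title"s,
    xml_url: ~x"./@xmlUrl"s
  )
```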
```elixir
result =
  %{feed_id: feed.id}
  |> FeedReader.Workers.FetchFeed.new(schedule_in: delay_minutes * 60)
  |> Oban.insert()
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if FetchFeed worker has unique constraint configured
ast-grep --pattern $'use Oban.Worker,
$$$
unique: $_
$$$'
Repository: kasuboski/feedreader
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Find the FetchFeed worker file
fd -t f "FetchFeed" feedreader/
Repository: kasuboski/feedreader
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Find Scheduler worker file
fd -t f "Scheduler" feedreader/ -o fd -t f "scheduler" feedreader/
Repository: kasuboski/feedreader
Length of output: 187
🏁 Script executed:
#!/bin/bash
# Check the Scheduler worker implementation
cat -n feedreader/lib/feedreader/workers/scheduler.ex
Repository: kasuboski/feedreader
Length of output: 1518
🏁 Script executed:
#!/bin/bash
# Find FetchFeed worker
find feedreader -type f -name "*fetch*" -o -name "*Fetch*" | grep -i worker
Repository: kasuboski/feedreader
Length of output: 166
🏁 Script executed:
#!/bin/bash
# Search for FetchFeed module definition
rg "defmodule.*FetchFeed" feedreader/
Repository: kasuboski/feedreader
Length of output: 254
🏁 Script executed:
#!/bin/bash
# Search for when last_fetched_at is updated
rg "last_fetched_at" feedreader/ -A 2 -B 2
Repository: kasuboski/feedreader
Length of output: 4170
🏁 Script executed:
#!/bin/bash
# Search for Oban config to see Scheduler cron schedule
rg "Oban" feedreader/lib/feedreader/application.ex feedreader/config/ -A 5 -B 2 2>/dev/null || rg "Oban" feedreader/ -l | head -5
Repository: kasuboski/feedreader
Length of output: 2510
🏁 Script executed:
#!/bin/bash
# Read FetchFeed worker to check for unique constraints and when log_fetch_success is called
cat -n feedreader/lib/feedreader/workers/fetch_feed.ex
Repository: kasuboski/feedreader
Length of output: 6976
Add Oban unique constraint to prevent duplicate fetch jobs for slow-running feeds.
Per the code, log_fetch_success (which updates last_fetched_at) is called after all entries are processed (line 30 of FetchFeed). The Scheduler runs every 3 minutes and checks a 10-minute fetch interval window. If a feed takes longer than 3 minutes to fetch—realistic given HTTP requests and XML parsing—the next Scheduler run will see stale last_fetched_at and enqueue a duplicate job.
Add Oban's unique constraint to the FetchFeed worker to prevent concurrent duplicate jobs for the same feed:
Proposed fix
```diff
 defmodule FeedReader.Workers.FetchFeed do
   @moduledoc "Oban worker that fetches and parses a single RSS/Atom feed."
-  use Oban.Worker, queue: :default
+  use Oban.Worker,
+    queue: :default,
+    unique: [period: 600, keys: [:feed_id]]
```

This ensures only one fetch job per feed can be pending or executing within a 10-minute window, aligning with the `@fetch_interval_minutes` value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feedreader/lib/feedreader/workers/scheduler.ex` around lines 28 - 31, The
scheduler can enqueue duplicate FetchFeed jobs for the same feed; update the
FetchFeed worker module (FeedReader.Workers.FetchFeed) to declare an Oban
uniqueness constraint that prevents multiple pending/executing jobs for the same
feed for the fetch interval duration (10 minutes). Add an Oban unique config on
the worker (set period to `@fetch_interval_minutes` * 60 or 600 seconds) and scope
the uniqueness to the feed identifier in the job args (so uniqueness keys/fields
reference the feed_id in args); if your Oban version requires module attributes
(e.g., `@unique`) or worker options, add it there so the existing Scheduler call
(FeedReader.Workers.FetchFeed.new(...)) will fail to enqueue duplicate jobs
within the 10-minute window.
Fix compile-time DateTime bug in set_attribute, add Oban unique constraint to prevent duplicate fetch jobs, create missing Emails module for magic link auth, remove dead phoenix-colocated import, and fix AGENTS.md typos.
Actionable comments posted: 5
♻️ Duplicate comments (2)
feedreader/lib/feedreader/workers/fetch_feed.ex (2)
161-165: ⚠️ Potential issue | 🟠 Major: Subtract the parsed RFC822 offset when normalizing to UTC.
`DateTime.new/2` defaults to `Etc/UTC`, and Elixir's own ISO8601 examples show that a timestamp with `+02:30` is converted to a UTC time 2h30 earlier. Here the local wall-clock time is already parsed, so adding `+0500` moves it the wrong direction; the offset needs to be subtracted when building the UTC `published_at`. (hexdocs.pm)
💡 Suggested fix

```diff
- DateTime.add(datetime, offset, :minute)
+ DateTime.add(datetime, -offset, :minute)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 161 - 165, The code currently adds the parsed RFC822 timezone offset (from tz_string via parse_tz_offset) to the DateTime produced by DateTime.new, which moves UTC the wrong direction; instead subtract the parsed offset when normalizing to UTC (i.e. use DateTime.add with the negative of offset or otherwise subtract minutes) so the resulting published_at is UTC-correct; update the logic around DateTime.new(date, time_struct), tz_string / parse_tz_offset, and the DateTime.add call that computes published_at accordingly.
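A quick sanity check of the sign convention behind the suggested fix:

```elixir
# 10:00 wall-clock time at +05:00 corresponds to 05:00 UTC, so the
# parsed offset must be subtracted, not added.
{:ok, wall_clock} = DateTime.new(~D[2024-01-01], ~T[10:00:00]) # Etc/UTC by default
utc = DateTime.add(wall_clock, -300, :minute)                  # -(+05:00) in minutes
utc.hour
# => 5
```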
56-59: ⚠️ Potential issue | 🟠 Major: Parse the feed once with secure, namespace-aware options.
Line 56 is querying untrusted XML directly. SweetXml recommends a separate `parse/2` step with `dtd: :none` for untrusted documents, and its namespace support docs require `namespace_conformant: true` to match namespaced nodes independent of the source prefix. Without that, common Atom feeds in the default Atom namespace won't match `//entry` here or the later `./title` / `./link` / `./published` selectors. (hexdocs.pm) Also applies to: 65-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 56 - 59, The XML is being queried directly with xmap (items: ~x"//item"l, entries: ~x"//entry"l) which risks DTD attacks and fails to match namespaced Atom nodes; change the code to first call SweetXml.parse/2 with secure, namespace-aware options (include dtd: :none and namespace_conformant: true), then run xmap on the parsed document so selectors like "//entry" and child selectors ("./title", "./link", "./published") will match namespaced feeds and untrusted input is parsed safely.
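The parse-then-query shape the comment describes, sketched (whether the existing child selectors also need namespace qualification after this change should be verified against SweetXml's namespace docs):

```elixir
import SweetXml

# Disable DTD loading for untrusted input and enable namespace-conformant
# parsing, then run the existing selectors against the parsed document.
doc = SweetXml.parse(body, dtd: :none, namespace_conformant: true)
parsed = xmap(doc, items: ~x"//item"l, entries: ~x"//entry"l)
```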
🧹 Nitpick comments (4)
feedreader/lib/feedreader/accounts/emails.ex (2)
12-12: Consider sending a clickable magic-link URL instead of raw token text. On Line 12, sending only the token adds friction and invites copy/paste errors. For magic-link UX, include a full URL (and optionally an expiry note) in the email body.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/accounts/emails.ex` at line 12, Replace the plain token text_body call so the email contains a full clickable magic-link URL and an expiry note: construct a URL from the token (use your app host from config) and set both html_body (with an <a> anchor pointing to the URL) and a text_body fallback that includes the full URL and expiry information; update the location where text_body("Your magic link token is: #{token}") is called to instead reference the token-derived URL and include both html_body and text_body for proper clickable UX.
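One way this could look with Swoosh (the route and the source of `base_url` are assumptions, not taken from the router):

```elixir
url = "#{base_url}/auth/magic_link?token=#{token}"

email
|> html_body(~s(<p><a href="#{url}">Sign in to Feedreader</a></p><p>This link expires shortly.</p>))
|> text_body("Sign in: #{url}\n\nThis link expires shortly.")
```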
10-10: Move sender address to runtime config. On Line 10, hardcoding `"noreply@feedreader.local"` makes non-local environments brittle. Pull the sender name/address from config (config/runtime.exs) so staging/prod can use verified domains.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/accounts/emails.ex` at line 10, Replace the hardcoded sender in the from/2 call with a runtime-configured value: read the sender name/address from application config (e.g. a :email_from or :sender tuple set in config/runtime.exs) using Application.fetch_env!/2 (or Application.get_env/3) and pass that into the existing from/2 call in accounts.emails.ex; update config/runtime.exs to provide the verified domain for staging/prod and keep the existing from({"Feedreader", "noreply@feedreader.local"}) usage shape by supplying the tuple from the config.feedreader/AGENTS.md (1)
11-132: Replace template placeholders with project-specific module names. Several examples still use `MyAppWeb`/`AppWeb`/`my_app_web`/`my_app` (e.g., Line 11, Line 28, Line 112, Line 131). In this repo, that can lead to copy-paste errors. Prefer `FeedreaderWeb`/`feedreader_web`, or explicitly label these as placeholders.
📝 Example edits

```diff
-- The `MyAppWeb.Layouts` module is aliased in the `my_app_web.ex` file, so you can use it without needing to alias it again
+- The `FeedreaderWeb.Layouts` module is aliased in `feedreader_web.ex`, so you can use it without needing to alias it again

- @source "../../lib/my_app_web";
+ @source "../../lib/feedreader_web";

- scope "/admin", AppWeb.Admin do
+ scope "/admin", FeedreaderWeb.Admin do

-- For "app wide" template imports, you can import/alias into the `my_app_web.ex`'s `html_helpers` block, so they will be available to all LiveViews, LiveComponent's, and all modules that do `use MyAppWeb, :html` (replace "my_app" by the actual app name)
+- For app-wide template imports, add imports/aliases in `feedreader_web.ex`'s `html_helpers` block, so they are available to all LiveViews, LiveComponents, and all modules that do `use FeedreaderWeb, :html`
```
Verify each finding against the current code and only fix it if needed. In `@feedreader/AGENTS.md` around lines 11 - 132, The docs contain generic placeholders (MyAppWeb, AppWeb, my_app_web, my_app) that must be replaced with this project's real names (FeedreaderWeb / feedreader_web) or explicitly marked as placeholders; update occurrences such as MyAppWeb.Layouts, Layouts.app, core_components.ex references, and any example route scopes so they use FeedreaderWeb/feedreader_web (or add a clear "replace with your app" note) to avoid copy-paste errors—search the file for MyAppWeb, AppWeb, my_app_web, my_app and replace or annotate them consistently, ensuring examples like scope blocks, component imports (<.icon>, <.input>, <.flash_group>) and mentions of layouts refer to FeedreaderWeb where appropriate.feedreader/assets/js/app.js (1)
56-79: Make dev editor listeners idempotent to avoid duplicate handlers. Lines 56-79 currently register global listeners inside the attach callback; repeated attach events can stack handlers and cause duplicated click/key behavior.
♻️ Suggested refactor
```diff
 if (process.env.NODE_ENV === "development") {
+  let keyDown = null
+  let editorBindingsInstalled = false
+
   window.addEventListener("phx:live_reload:attached", ({detail: reloader}) => {
     // Enable server log streaming to client.
     // Disable with reloader.disableServerLogs()
     reloader.enableServerLogs()
+    window.liveReloader = reloader

     // Open configured PLUG_EDITOR at file:line of the clicked element's HEEx component
@@
-    let keyDown
-    window.addEventListener("keydown", e => keyDown = e.key)
-    window.addEventListener("keyup", _e => keyDown = null)
-    window.addEventListener("click", e => {
-      if(keyDown === "c"){
-        e.preventDefault()
-        e.stopImmediatePropagation()
-        reloader.openEditorAtCaller(e.target)
-      } else if(keyDown === "d"){
-        e.preventDefault()
-        e.stopImmediatePropagation()
-        reloader.openEditorAtDef(e.target)
-      }
-    }, true)
-
-    window.liveReloader = reloader
+    if (!editorBindingsInstalled) {
+      editorBindingsInstalled = true
+      window.addEventListener("keydown", e => (keyDown = e.key))
+      window.addEventListener("keyup", () => (keyDown = null))
+      window.addEventListener("click", e => {
+        if (keyDown === "c") {
+          e.preventDefault()
+          e.stopImmediatePropagation()
+          window.liveReloader?.openEditorAtCaller(e.target)
+        } else if (keyDown === "d") {
+          e.preventDefault()
+          e.stopImmediatePropagation()
+          window.liveReloader?.openEditorAtDef(e.target)
+        }
+      }, true)
+    }
   })
 }
```
Verify each finding against the current code and only fix it if needed. In `@feedreader/assets/js/app.js` around lines 56 - 79, The attach callback currently re-registers global handlers on every "phx:live_reload:attached" event causing duplicate key/click behavior; make these listeners idempotent by either storing named handler references (e.g., keydownHandler, keyupHandler, clickHandler) and removing them with removeEventListener before re-adding, or by guarding the registration with a boolean flag (e.g., window.__devEditorListenersAttached) so the body that declares keyDown and registers keydown/keyup/click listeners runs only once; ensure the handlers still call reloader.openEditorAtCaller(e.target) and reloader.openEditorAtDef(e.target) and that event.preventDefault/stopImmediatePropagation remain in the click handler.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@feedreader/lib/feedreader/accounts/emails.ex`:
- Around line 6-10: deliver_magic_link/2 currently passes user_or_email straight
into to(), which can be a user struct or an email string; add two clause
overloads for deliver_magic_link: one that pattern-matches a User struct (e.g.,
%User{} in the function head) and extracts the email (user.email or
Map.get(user, :email)) before calling to(), and a second clause that accepts a
plain email string and uses it directly; update the implementation around to()
in the function bodies so both paths build the same email variable for the rest
of the pipeline.
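The two-clause shape described above, sketched (the helper and mailer module names are illustrative):

```elixir
def deliver_magic_link(%Feedreader.Accounts.User{email: email}, token) do
  # Normalize the struct case onto the plain-string case.
  deliver_magic_link(to_string(email), token)
end

def deliver_magic_link(email, token) when is_binary(email) do
  email
  |> build_magic_link_email(token)  # hypothetical helper
  |> Feedreader.Mailer.deliver()    # mailer module name assumed
end
```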
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 17-31: The current loop treats every Core.upsert_from_feed/1
result as a creation; change the logic to detect true inserts and only
broadcast/count those: for each incoming entry, first check existence (e.g. call
a read function on Core such as a get_by external id or feed+guid lookup) and if
not found call Core.upsert_from_feed/1 (or a create variant) and collect as
inserted, otherwise call upsert/update but mark as updated; then build
new_entries only from the actually inserted results and broadcast
"entry:created" and count length(new_entries) accordingly; alternatively modify
Core.upsert_from_feed/1 to return {:ok, :inserted, record} | {:ok, :updated,
record} and use that tag to gate the broadcast and count, ensuring
Core.log_fetch_success/1 is still called once after processing.
- Around line 39-40: In fetch_feed (the private function fetch_feed/1) stop
passing unvalidated feed_url (url) directly to Req.get/2: first parse the URL
and ensure scheme is "http" or "https", reject hostnames like "localhost" or
literal loopback names, resolve the hostname to IP(s) and reject any address in
loopback, link-local, multicast, RFC1918/private ranges or the cloud metadata IP
(169.254.169.254) before fetching, and then call Req.get(url, receive_timeout:
30_000, redirect: false); only allow follow-up redirects if you re-validate each
redirect target with the same checks. Ensure these checks live next to
fetch_feed/1 so Req.get/2 is only invoked after URL and host/IP validation.
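A sketch of the host validation described above (helper names are illustrative; the clauses cover the ranges the prompt lists, though a production check should enumerate the full set of reserved ranges and IPv6 equivalents):

```elixir
defp safe_url?(url) do
  with %URI{scheme: scheme, host: host} when scheme in ["http", "https"] and is_binary(host) <-
         URI.parse(url),
       {:ok, addrs} <- :inet.getaddrs(String.to_charlist(host), :inet) do
    Enum.all?(addrs, &public_ip?/1)
  else
    _ -> false
  end
end

defp public_ip?({127, _, _, _}), do: false                    # loopback
defp public_ip?({10, _, _, _}), do: false                     # RFC1918
defp public_ip?({172, b, _, _}) when b in 16..31, do: false   # RFC1918
defp public_ip?({192, 168, _, _}), do: false                  # RFC1918
defp public_ip?({169, 254, _, _}), do: false                  # link-local / cloud metadata
defp public_ip?(_), do: true
```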
- Around line 83-85: Replace the truthy `||` fallback for published_raw with an
explicit non-empty check: instead of relying on xpath(...) || xpath(...) (which
stops at an empty string), call the three xpath expressions (the ones currently
used to assign published_raw) and pick the first value that is not an empty
string (e.g. use Enum.find/Enum.find_value to select the first s when s != ""),
then use that result for published_raw and downstream published_at logic in the
fetch_feed.ex code.
In `@feedreader/priv/repo/migrations/20260324233336_migrate_resources1_dev.exs`:
- Around line 11-13: The migration currently uses alter table(:users) do
add(:email, :text, null: false) end which will fail on existing rows; change
this to either add the column with a safe default (e.g. add(:email, :text,
default: "unknown@example.com", null: false) then remove the default if needed)
or perform a two-step migration: first add(:email, :text, null: true), run a
backfill task to populate email for existing records (or set a sensible
default), and then run a subsequent migration that alters the column to null:
false; update the migration code to use table(:users) and the add(:email, ...)
statements accordingly.
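The single-migration variant from the prompt, sketched (the default address is illustrative only):

```elixir
def up do
  alter table(:users) do
    # A default satisfies NOT NULL for rows that already exist.
    add :email, :text, null: false, default: "unknown@example.com"
  end
end
```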
---
Duplicate comments:
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 161-165: The code currently adds the parsed RFC822 timezone offset
(from tz_string via parse_tz_offset) to the DateTime produced by DateTime.new,
which moves UTC the wrong direction; instead subtract the parsed offset when
normalizing to UTC (i.e. use DateTime.add with the negative of offset or
otherwise subtract minutes) so the resulting published_at is UTC-correct; update
the logic around DateTime.new(date, time_struct), tz_string / parse_tz_offset,
and the DateTime.add call that computes published_at accordingly.
- Around line 56-59: The XML is being queried directly with xmap (items:
~x"//item"l, entries: ~x"//entry"l) which risks DTD attacks and fails to match
namespaced Atom nodes; change the code to first call SweetXml.parse/2 with
secure, namespace-aware options (include dtd: :none and namespace_conformant:
true), then run xmap on the parsed document so selectors like "//entry" and
child selectors ("./title", "./link", "./published") will match namespaced feeds
and untrusted input is parsed safely.
---
Nitpick comments:
In `@feedreader/AGENTS.md`:
- Around line 11-132: The docs contain generic placeholders (MyAppWeb, AppWeb,
my_app_web, my_app) that must be replaced with this project's real names
(FeedreaderWeb / feedreader_web) or explicitly marked as placeholders; update
occurrences such as MyAppWeb.Layouts, Layouts.app, core_components.ex
references, and any example route scopes so they use
FeedreaderWeb/feedreader_web (or add a clear "replace with your app" note) to
avoid copy-paste errors—search the file for MyAppWeb, AppWeb, my_app_web, my_app
and replace or annotate them consistently, ensuring examples like scope blocks,
component imports (<.icon>, <.input>, <.flash_group>) and mentions of layouts
refer to FeedreaderWeb where appropriate.
In `@feedreader/assets/js/app.js`:
- Around line 56-79: The attach callback currently re-registers global handlers
on every "phx:live_reload:attached" event causing duplicate key/click behavior;
make these listeners idempotent by either storing named handler references
(e.g., keydownHandler, keyupHandler, clickHandler) and removing them with
removeEventListener before re-adding, or by guarding the registration with a
boolean flag (e.g., window.__devEditorListenersAttached) so the body that
declares keyDown and registers keydown/keyup/click listeners runs only once;
ensure the handlers still call reloader.openEditorAtCaller(e.target) and
reloader.openEditorAtDef(e.target) and that
event.preventDefault/stopImmediatePropagation remain in the click handler.
In `@feedreader/lib/feedreader/accounts/emails.ex`:
- Line 12: Replace the plain token text_body call so the email contains a full
clickable magic-link URL and an expiry note: construct a URL from the token (use
your app host from config) and set both html_body (with an <a> anchor pointing
to the URL) and a text_body fallback that includes the full URL and expiry
information; update the location where text_body("Your magic link token is:
#{token}") is called to instead reference the token-derived URL and include both
html_body and text_body for proper clickable UX.
- Line 10: Replace the hardcoded sender in the from/2 call with a
runtime-configured value: read the sender name/address from application config
(e.g. a :email_from or :sender tuple set in config/runtime.exs) using
Application.fetch_env!/2 (or Application.get_env/3) and pass that into the
existing from/2 call in accounts.emails.ex; update config/runtime.exs to provide
the verified domain for staging/prod and keep the existing from({"Feedreader",
"noreply@feedreader.local"}) usage shape by supplying the tuple from the config.
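Both email fixes can be sketched together. The config key `:email_from`, the `magic_link_url/1` helper, and the `/auth/magic_link` path are illustrative assumptions, not the project's actual API:

```elixir
# config/runtime.exs (sketch): supply the verified sender per environment.
#   config :feedreader, :email_from, {"Feedreader", "noreply@feedreader.example"}

# In Feedreader.Accounts.Emails — hypothetical helpers shown for shape only.
defp sender, do: Application.fetch_env!(:feedreader, :email_from)

defp magic_link_url(token) do
  # Host would come from endpoint config; the path is an assumed route.
  host = Application.fetch_env!(:feedreader, FeedreaderWeb.Endpoint)[:url][:host]
  "https://#{host}/auth/magic_link?token=#{URI.encode_www_form(token)}"
end

defp build_email(recipient, token) do
  url = magic_link_url(token)

  new()
  |> to(recipient)
  |> from(sender())
  |> subject("Your Magic Link")
  |> html_body("""
  <p><a href="#{url}">Sign in to Feedreader</a></p>
  <p>This link expires soon; request a new one if it stops working.</p>
  """)
  |> text_body("Sign in: #{url}\nThis link expires soon.")
end
```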
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 02b45b30-96c3-4d66-a306-12d1567e8d98
📒 Files selected for processing (7)
- feedreader/AGENTS.md
- feedreader/assets/js/app.js
- feedreader/lib/feedreader/accounts/emails.ex
- feedreader/lib/feedreader/core/feed.ex
- feedreader/lib/feedreader/workers/fetch_feed.ex
- feedreader/priv/repo/migrations/20260324233336_migrate_resources1_dev.exs
- feedreader/priv/resource_snapshots/repo/users/20260324233337_dev.json
✅ Files skipped from review due to trivial changes (1)
- feedreader/priv/resource_snapshots/repo/users/20260324233337_dev.json
📜 Review details
🧰 Additional context used
🪛 LanguageTool
feedreader/AGENTS.md
[style] ~5-~5: Consider using a different verb for a more formal wording.
Context: ... when you are done with all changes and fix any pending issues - Use the already in...
(FIX_RESOLVE)
[style] ~15-~15: This phrase is redundant. Consider using “outside”.
Context: ...rbidden** from calling <.flash_group> outside of the layouts.ex module - Out of the bo...
(OUTSIDE_OF)
[grammar] ~23-~23: Ensure spelling is correct
Context: ...ve, and visually stunning interfaces. - Tailwindcss v4 **no longer needs a tailwind.config....
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~51-~51: Use a hyphen to join words.
Context: ...s - Elixir lists do not support index based access via the access syntax ...
(QB_NEW_EN_HYPHEN)
[grammar] ~59-~59: Use a hyphen to join words.
Context: ..., pattern matching, or List` for index based list access, ie: i = 0 ...
(QB_NEW_EN_HYPHEN)
[style] ~65-~65: In American English, abbreviations like “etc.” require a period.
Context: ... expressions like if, case, cond, etc you must bind the result of the exp...
(ETC_PERIOD)
[grammar] ~80-~80: Use a hyphen to join words.
Context: ... such as my_struct.field or use higher level APIs that are available on the str...
(QB_NEW_EN_HYPHEN)
[grammar] ~84-~84: Ensure spelling is correct
Context: ...hould be reserved for guards - Elixir's builtin OTP primitives like DynamicSupervisor...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[style] ~91-~91: ‘almost never’ might be wordy. Consider a shorter alternative.
Context: ...--failed-mix deps.clean --all` is almost never needed. Avoid using it unless you...
(EN_WORDINESS_PREMIUM_ALMOST_NEVER)
[style] ~130-~130: In American English, abbreviations like “etc.” require a period.
Context: ...s to key elements (like forms, buttons, etc) when writing templates, these IDs can ...
(ETC_PERIOD)
[style] ~154-~154: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...literal curly's like { or }. If you want to show a textual code snippet on the page...
(REP_WANT_TO_VB)
[style] ~362-~362: In American English, abbreviations like “etc.” require a period.
Context: ...element/2, has_element/2`, selectors, etc - Never test against raw HTML, **al...
(ETC_PERIOD)
🔇 Additional comments (3)
feedreader/priv/repo/migrations/20260324233336_migrate_resources1_dev.exs (1)
18-24: LGTM! The `down/0` function correctly reverses the operations in proper order—dropping the index before removing the column—and uses `drop_if_exists` for defensive rollback behavior.

feedreader/lib/feedreader/accounts/emails.ex (1)

14-17: Mailer result handling is clean. The success/error tuple handling is straightforward and keeps failures explicit.

feedreader/assets/js/app.js (1)

21-48: LiveView bootstrap and navigation progress wiring look solid. `LiveSocket` setup, CSRF param injection, topbar lifecycle events, and window exposure are clean and consistent.
```elixir
def deliver_magic_link(user_or_email, token) do
  email =
    new()
    |> to(user_or_email)
    |> from({"Feedreader", "noreply@feedreader.local"})
```
🧩 Analysis chain
🌐 Web query:
In the current AshAuthentication magic_link strategy, what concrete types can be passed as the first argument to the sender callback (e.g., user struct vs email string)?
💡 Result:
In the current AshAuthentication magic_link strategy, the first argument to the sender callback can be either the user struct (Ash.Resource.record when the token relates to an existing user) or the identity field value as a string (typically an email string when there is no matching user, such as during sign up).
Citations:
- 1: https://hexdocs.pm/ash_authentication/4.2.1/AshAuthentication.Strategy.MagicLink.html
- 2: https://preview.hex.pm/preview/ash_authentication/4.2.5/show/documentation/dsls/DSL:-AshAuthentication.Strategy.MagicLink.md
- 3: https://hexdocs.pm/ash_authentication/dsl-ashauthentication-strategy-magiclink.html
- 4: https://hexdocs.pm/ash_authentication/4.2.2/AshAuthentication.Strategy.MagicLink.html
- 5: https://hexdocs.pm/ash_authentication/magic-links.html
- 6: https://hexdocs.pm/ash_authentication/AshAuthentication.Sender.html
Handle both user struct and email string from magic_link callback
The deliver_magic_link/2 function on line 9 passes user_or_email directly to to() without normalization. Per AshAuthentication's magic_link strategy, this argument can be either a user struct (for existing users) or an email string (for new users). Passing a user struct directly to to() will fail at runtime.
Implement function overloads to handle both cases:
Suggested fix
```diff
-def deliver_magic_link(user_or_email, token) do
+def deliver_magic_link(%{email: email}, token), do: deliver_magic_link(email, token)
+
+def deliver_magic_link(user_or_email, token) when is_binary(user_or_email) do
   email =
     new()
     |> to(user_or_email)
     |> from({"Feedreader", "noreply@feedreader.local"})
     |> subject("Your Magic Link")
     |> text_body("Your magic link token is: #{token}")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```elixir
def deliver_magic_link(%{email: email}, token), do: deliver_magic_link(email, token)

def deliver_magic_link(user_or_email, token) when is_binary(user_or_email) do
  email =
    new()
    |> to(user_or_email)
    |> from({"Feedreader", "noreply@feedreader.local"})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feedreader/lib/feedreader/accounts/emails.ex` around lines 6 - 10,
deliver_magic_link/2 currently passes user_or_email straight into to(), which
can be a user struct or an email string; add two clause overloads for
deliver_magic_link: one that pattern-matches a User struct (e.g., %User{} in the
function head) and extracts the email (user.email or Map.get(user, :email))
before calling to(), and a second clause that accepts a plain email string and
uses it directly; update the implementation around to() in the function bodies
so both paths build the same email variable for the rest of the pipeline.
```elixir
alter table(:users) do
  add(:email, :text, null: false)
end
```
Adding a non-null column without a default may fail on existing rows.
If the users table already contains rows, this migration will fail because existing records won't have a value for the new required email column. If this is intended only for fresh databases, consider documenting that assumption. Otherwise, provide a default or handle existing data:
🛠️ Option: Add a default for existing rows
```diff
 alter table(:users) do
-  add(:email, :text, null: false)
+  add(:email, :text, null: false, default: "")
 end
```

Or use a two-step migration: add nullable first, backfill data, then alter to non-null.
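A sketch of the two-step shape. Module names and the backfill value are placeholders, and note that on SQLite (which this project uses), altering a column to NOT NULL may require a table rebuild; this shows only the generic Ecto form:

```elixir
# Migration 1: add the column as nullable so existing rows are unaffected.
defmodule Feedreader.Repo.Migrations.AddEmailNullable do
  use Ecto.Migration

  def change do
    alter table(:users) do
      add(:email, :text, null: true)
    end
  end
end

# Migration 2 (separate file, run after backfilling existing rows):
defmodule Feedreader.Repo.Migrations.RequireUserEmail do
  use Ecto.Migration

  def up do
    # Placeholder backfill value; a real backfill would derive it per row.
    execute("UPDATE users SET email = 'unknown@example.com' WHERE email IS NULL")

    alter table(:users) do
      modify(:email, :text, null: false)
    end
  end

  def down do
    alter table(:users) do
      modify(:email, :text, null: true)
    end
  end
end
```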
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```elixir
alter table(:users) do
  add(:email, :text, null: false, default: "")
end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@feedreader/priv/repo/migrations/20260324233336_migrate_resources1_dev.exs`
around lines 11 - 13, The migration currently uses alter table(:users) do
add(:email, :text, null: false) end which will fail on existing rows; change
this to either add the column with a safe default (e.g. add(:email, :text,
default: "unknown@example.com", null: false) then remove the default if needed)
or perform a two-step migration: first add(:email, :text, null: true), run a
backfill task to populate email for existing records (or set a sensible
default), and then run a subsequent migration that alters the column to null:
false; update the migration code to use table(:users) and the add(:email, ...)
statements accordingly.
The || chain in parse_feed treated empty strings from missing pubDate/published elements as truthy, preventing fallback to <updated>. Replace with explicit nil/empty checks. Also gate entry:created broadcast on true inserts only by checking existence before upsert.
Actionable comments posted: 2
♻️ Duplicate comments (2)
feedreader/lib/feedreader/workers/fetch_feed.ex (2)
21-25: ⚠️ Potential issue | 🟠 Major

Existence detection is incorrect and marks missing rows as existing.

Line 23 matches `{:ok, _}`, which also matches `{:ok, nil}`. That makes new entries look “existing”, suppressing insert broadcasts/counts.

Proposed fix

```diff
 existed? =
   case Core.get_entry_by_feed_and_external_id(feed.id, entry.external_id) do
-    {:ok, _} -> true
+    {:ok, record} when not is_nil(record) -> true
     _ -> false
   end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 21 - 25, The current existence check using Core.get_entry_by_feed_and_external_id(...) treats {:ok, nil} as a match and marks non-existent entries as existing; update the case to explicitly detect nil (e.g., match {:ok, entry} when not is_nil(entry) -> true, {:ok, nil} -> false, _ -> false) so existed? correctly reflects whether an actual entry struct was returned; reference the existed? binding and the Core.get_entry_by_feed_and_external_id call in fetch_feed.ex.
55-57: ⚠️ Potential issue | 🔴 Critical

Unvalidated feed URL fetch allows SSRF targets (including redirect chains).

Line 56 fetches `url` directly with `Req.get/2`. This needs strict URL/host validation (public `http`/`https` only) and controlled redirect behavior before network calls.

Suggested hardening direction

```diff
-    case Req.get(url, receive_timeout: 30_000) do
+    with :ok <- validate_public_http_url(url),
+         {:ok, response} <- Req.get(url, receive_timeout: 30_000, redirect: false) do
+      case response do
+        %{status: 200, body: body} -> parse_feed(body)
+        %{status: status} -> {:error, "HTTP status: #{status}"}
+      end
+    else
+      {:error, reason} -> {:error, reason}
+    end
```

```shell
#!/bin/bash
# Verify current SSRF exposure points and redirect handling (read-only)
rg -n --type=elixir -C3 '\bReq\.get\s*\(' feedreader/lib/feedreader/workers/fetch_feed.ex
rg -n --type=elixir -C3 'feed_url' feedreader/lib/feedreader/core/feed.ex feedreader/lib/feedreader_web
rg -n --type=elixir -C2 'redirect:\s*false|redirect:' feedreader/lib/feedreader/workers/fetch_feed.ex
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 55 - 57, The fetch_feed function currently calls Req.get(url, ...) directly which allows SSRF via arbitrary schemes, hosts, and redirects; fix by parsing and validating the URL before the network call (create a helper like validate_feed_url/1 used by fetch_feed) to ensure the scheme is only "http" or "https", the host is present, and the resolved IP(s) are not private/loopback/link-local/multicast addresses; additionally, configure Req.get to disable or tightly control redirects (e.g., follow_redirects: false or max_redirects set) and, if you allow redirects, validate each redirect target with the same validate_feed_url/1 logic (or a validate_redirect/1 helper) before following; ensure fetch_feed returns an error for invalid URLs instead of calling Req.get.
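A sketch of such a `validate_public_http_url/1` helper. The private-range list is deliberately minimal (no IPv6 handling), and resolving via `:inet.getaddrs/2` is an assumption about acceptable DNS behavior; function names are illustrative:

```elixir
# Sketch: reject non-http(s) schemes, missing hosts, and hosts that resolve
# to loopback/private/link-local addresses. Not exhaustive.
defp validate_public_http_url(url) do
  with %URI{scheme: scheme, host: host} when scheme in ["http", "https"] and is_binary(host) <-
         URI.parse(url),
       {:ok, addrs} <- :inet.getaddrs(String.to_charlist(host), :inet),
       false <- Enum.any?(addrs, &private_addr?/1) do
    :ok
  else
    _ -> {:error, :invalid_feed_url}
  end
end

defp private_addr?({127, _, _, _}), do: true
defp private_addr?({10, _, _, _}), do: true
defp private_addr?({192, 168, _, _}), do: true
defp private_addr?({172, b, _, _}) when b in 16..31, do: true
defp private_addr?({169, 254, _, _}), do: true
defp private_addr?(_), do: false
```

Note that DNS rebinding can still race this check; the stricter variant resolves once and connects to the validated IP directly.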
🧹 Nitpick comments (1)
feedreader/test/feedreader/workers/fetch_feed_test.exs (1)
120-205: `describe "perform/1"` doesn’t currently exercise `FetchFeed.perform/1`.

Line 120 groups tests under worker behavior, but the cases only validate `Core` upsert/lookup logic. Consider either renaming this block or adding at least one true worker-path test (job args + fetch + parse + persist + broadcast) to prevent false confidence.

Suggested minimal rename to avoid scope confusion

```diff
-  describe "perform/1" do
+  describe "entry classification/upsert integration" do
```

🤖 Prompt for AI Agents
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 193-197: The RFC822 timezone handling is using the wrong sign:
parse_tz_offset(tz_string) returns minutes for a "+HHMM" which must move UTC
backward, but the code does DateTime.add(datetime, offset, :minute); change the
sign before applying the offset (e.g., use DateTime.add(datetime, -offset,
:minute) or invert the value returned by parse_tz_offset) so that "+HHMM"
subtracts minutes and "-HHMM" adds minutes; update the logic around
tz_string/offset in the timezone adjustment block accordingly.
In `@feedreader/test/feedreader/core/entry_test.exs`:
- Around line 96-98: The test description is misleading: the test "returns error
when entry does not exist" asserts {:ok, nil} from
Core.get_entry_by_feed_and_external_id/2, so update the test name to reflect the
expected {:ok, nil} outcome (for example "returns {:ok, nil} when entry does not
exist" or "returns nil when entry does not exist") so the test string matches
the assertion and aids diagnosis; locate the test block containing the call to
Core.get_entry_by_feed_and_external_id/2 and change only the descriptive string.
---
Duplicate comments:
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 21-25: The current existence check using
Core.get_entry_by_feed_and_external_id(...) treats {:ok, nil} as a match and
marks non-existent entries as existing; update the case to explicitly detect nil
(e.g., match {:ok, entry} when not is_nil(entry) -> true, {:ok, nil} -> false, _
-> false) so existed? correctly reflects whether an actual entry struct was
returned; reference the existed? binding and the
Core.get_entry_by_feed_and_external_id call in fetch_feed.ex.
- Around line 55-57: The fetch_feed function currently calls Req.get(url, ...)
directly which allows SSRF via arbitrary schemes, hosts, and redirects; fix by
parsing and validating the URL before the network call (create a helper like
validate_feed_url/1 used by fetch_feed) to ensure the scheme is only "http" or
"https", the host is present, and the resolved IP(s) are not
private/loopback/link-local/multicast addresses; additionally, configure Req.get
to disable or tightly control redirects (e.g., follow_redirects: false or
max_redirects set) and, if you allow redirects, validate each redirect target
with the same validate_feed_url/1 logic (or a validate_redirect/1 helper) before
following; ensure fetch_feed returns an error for invalid URLs instead of
calling Req.get.
---
Nitpick comments:
In `@feedreader/test/feedreader/workers/fetch_feed_test.exs`:
- Around line 120-205: The tests in the describe "perform/1" block never call
FetchFeed.perform/1, so either rename the describe to reflect that they test
Core upsert/lookup logic or add a worker-path test that invokes
FetchFeed.perform/1 with appropriate job args and asserts the end-to-end
behavior (fetch->parse->persist->broadcast); to fix, either (A) rename the
describe to something like "Core upsert/lookup" and keep the existing tests
as-is referencing Core.get_entry_by_feed_and_external_id and
Core.upsert_from_feed!/upsert_from_feed, or (B) add a new test that builds job
args (feed.id etc.), stubs/mocks the HTTP feed response and any broadcaster,
calls FetchFeed.perform(%Oban.Job{args: args}) (or the module function signature
used), and then asserts the created/updated entries via
Core.list_entries!/Core.get_entry_by_feed_and_external_id and that broadcasts
were emitted—use FetchFeed.perform/1, the feed fixture from setup, and existing
Core helpers to locate the right code to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 870ceab0-70c0-44e3-b604-6b170d801d16
📒 Files selected for processing (4)
- feedreader/lib/feedreader/core.ex
- feedreader/lib/feedreader/workers/fetch_feed.ex
- feedreader/test/feedreader/core/entry_test.exs
- feedreader/test/feedreader/workers/fetch_feed_test.exs
🚧 Files skipped from review as they are similar to previous changes (1)
- feedreader/lib/feedreader/core.ex
📜 Review details
🔇 Additional comments (2)
feedreader/test/feedreader/workers/fetch_feed_test.exs (1)
7-118: Strong parsing coverage for RSS/Atom/date fallbacks. Nice job covering missing dates, ISO8601, Atom `<updated>`, and RSS priority behavior—this meaningfully hardens parser regressions.

feedreader/test/feedreader/core/entry_test.exs (1)

11-327: Great coverage depth across toggles, filtering, ordering, and pagination. This is a strong foundational test suite for `FeedReader.Core` entry flows.
Negate tz offset when converting to UTC (+0500 means ahead of UTC, so
subtract). Fix existed? to not treat {:ok, nil} as a match. Correct
misleading test names and describe blocks.
🧹 Nitpick comments (1)
feedreader/lib/feedreader/workers/fetch_feed.ex (1)
41-42: Consider logging upsert failures for observability.

Individual entry upsert errors are silently swallowed, which could hide validation failures or constraint violations. Adding a warning log would help with debugging without failing the entire job.

```diff
   {:error, _} ->
+    Logger.warning("Failed to upsert entry: #{inspect(entry.external_id)}")
     {ins, upd}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@feedreader/lib/feedreader/workers/fetch_feed.ex` around lines 41 - 42, The {:error, _} -> {ins, upd} clause swallows upsert errors; change it to capture the error (e.g., {:error, err}) and emit a warning with context (use Logger.warn or Logger.error) including the err and identifying info like the entry id or title and the local variables ins/upd before returning {ins, upd}; update the clause in the same function that performs the per-entry upsert so failures are logged for observability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@feedreader/lib/feedreader/workers/fetch_feed.ex`:
- Around line 41-42: The {:error, _} -> {ins, upd} clause swallows upsert
errors; change it to capture the error (e.g., {:error, err}) and emit a warning
with context (use Logger.warn or Logger.error) including the err and identifying
info like the entry id or title and the local variables ins/upd before returning
{ins, upd}; update the clause in the same function that performs the per-entry
upsert so failures are logged for observability.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d7097ccf-e799-403e-bd3f-410788318893
📒 Files selected for processing (3)
- feedreader/lib/feedreader/workers/fetch_feed.ex
- feedreader/test/feedreader/core/entry_test.exs
- feedreader/test/feedreader/workers/fetch_feed_test.exs
✅ Files skipped from review due to trivial changes (1)
- feedreader/test/feedreader/core/entry_test.exs
📜 Review details
🔇 Additional comments (6)
feedreader/lib/feedreader/workers/fetch_feed.ex (3)
55-56: SSRF vulnerability: Block internal/private targets before fetching.

The `feed_url` is user-controlled and fetched directly via `Req.get/2`. Since Req follows redirects by default (up to 10 hops), an attacker could fetch internal resources (localhost, private IP ranges, cloud metadata at `169.254.169.254`) either directly or via redirect chains.

Consider validating that `feed_url` resolves to a public IP before fetching, and either disable redirects or re-validate each redirect target.
68-143: Parse function correctly handles RSS and Atom formats. The implementation properly:

- Extracts both RSS `<item>` and Atom `<entry>` elements
- Handles Atom `link/@href` attributes before falling back to text content
- Uses explicit empty-string checks instead of truthy `||` for xpath fallbacks
- Gracefully catches XML parsing errors
146-247: Date parsing handles ISO8601 and RFC822 correctly. The implementation properly:
- Attempts ISO8601 first, falling back to RFC822
- Parses HHMM timezone offsets correctly (div/rem for hours/minutes)
- Applies negative offset to convert local time to UTC
- Handles common timezone abbreviations
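The offset math from the bullets above, made concrete: in "Tue, 01 Jul 2025 12:00:00 +0500", local noon is five hours ahead of UTC, so UTC is 07:00; the parsed offset must be subtracted. A sketch, with function names illustrative rather than the worker's actual ones:

```elixir
# "+0500" -> 300 minutes east of UTC; local wall-clock minus offset = UTC.
defp parse_tz_offset(<<sign, h1, h2, m1, m2>>) do
  minutes = String.to_integer(<<h1, h2>>) * 60 + String.to_integer(<<m1, m2>>)
  if sign == ?-, do: -minutes, else: minutes
end

defp rfc822_to_utc(naive_local, tz_string) do
  offset = parse_tz_offset(tz_string)

  naive_local
  |> DateTime.from_naive!("Etc/UTC")  # interpret the wall-clock digits as-is
  |> DateTime.add(-offset, :minute)   # "+0500" subtracts 300 minutes
end
```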
feedreader/test/feedreader/workers/fetch_feed_test.exs (3)
8-97: Good coverage of RSS and Atom parsing edge cases. The tests comprehensively cover:

- RSS `pubDate` extraction
- Missing dates gracefully returning `nil`
- ISO8601 parsing with datetime field assertions
- Atom `<updated>` fallback when `<published>` is absent
- RFC822 timezone offset conversions (both + and -)
160-206: Solid upsert/lookup integration tests. The tests properly validate:

- Lookup returns `nil` for non-existent entries
- Upsert creates and allows subsequent lookup
- Repeated upsert updates without creating duplicates
217-220: No issue found — `content_link` is optional.

The `content_link` attribute in the Entry resource has `allow_nil? true` (entry.ex:88-90), making it optional. The test data at lines 218-219 is valid without `content_link`, and the upsert call will not fail due to its absence.

> Likely an incorrect or invalid review comment.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores