
Conversation

@ShinDongWoon
Owner

Summary

  • make SlidingWindowDataset emit one series per sample and update evaluation utilities to stack outputs per id (a per-series dataset sketch follows this list)
  • add per-layer period counters, tighten FFT period filtering, and lower the default k_periods to 2 (see the FFT period-selection sketch below)
  • refactor negative binomial masking/logging and simplify the warmup-to-cosine scheduler handling with safer min_sigma_vector slicing (a scheduler sketch is included below)
  • optimize per-series window slicing to avoid redundant clones and log the per-epoch sample/batch workload for clarity
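
A minimal sketch of the per-series sampling idea. The class and field names below are illustrative assumptions, not the repo's actual SlidingWindowDataset:

```python
import torch
from torch.utils.data import Dataset


class PerSeriesWindows(Dataset):
    """Illustrative dataset: each sample is one window from one series."""

    def __init__(self, series: torch.Tensor, window: int, horizon: int):
        # series: [num_series, T] tensor of aligned per-id histories
        self.series = series
        self.window = window
        self.horizon = horizon
        self.per_series = series.shape[1] - window - horizon + 1

    def __len__(self) -> int:
        return self.series.shape[0] * self.per_series

    def __getitem__(self, idx: int):
        sid, offset = divmod(idx, self.per_series)
        row = self.series[sid]
        # Slices are views; no clone is needed unless the consumer mutates them.
        x = row[offset : offset + self.window]
        y = row[offset + self.window : offset + self.window + self.horizon]
        return sid, x, y
```

Downstream, predictions can be grouped by the returned series id and stacked per id, which is what the updated evaluation utilities appear to do.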
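
For the FFT period filtering and the k_periods default, here is a sketch of the general FFT-based top-k period-selection pattern; the function name and the exact filtering thresholds are assumptions, not the PR's code:

```python
import torch


def select_periods(x: torch.Tensor, k_periods: int = 2) -> list[int]:
    """Pick up to k_periods dominant periods from the FFT amplitude spectrum.

    x: [batch, length, channels]. Frequencies whose implied period is
    degenerate or too long for the window are filtered out before counting
    toward k_periods.
    """
    length = x.shape[1]
    spectrum = torch.fft.rfft(x, dim=1)
    amplitude = spectrum.abs().mean(dim=(0, 2))  # average over batch and channels
    amplitude[0] = 0.0                           # ignore the DC component
    periods: list[int] = []
    for f in torch.argsort(amplitude, descending=True).tolist():
        if f == 0:
            continue
        period = length // f
        # Tightened filtering: drop periods shorter than 2 steps or longer than half the window.
        if period < 2 or period > length // 2:
            continue
        if period not in periods:
            periods.append(period)
        if len(periods) == k_periods:
            break
    return periods
```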
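
One common way to express a warmup-to-cosine schedule without manual phase bookkeeping is torch.optim.lr_scheduler.SequentialLR. This is a sketch of the pattern, not necessarily how the PR wires it up:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

warmup_epochs, total_epochs = 5, 50
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        # Linear warmup from 10% of the base LR to the full LR.
        LinearLR(optimizer, start_factor=0.1, total_iters=warmup_epochs),
        # Cosine decay over the remaining epochs.
        CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs),
    ],
    milestones=[warmup_epochs],  # switch schedulers after the warmup phase
)

for epoch in range(total_epochs):
    # ... run one training epoch, calling optimizer.step() per batch ...
    scheduler.step()  # advance the schedule once per epoch
```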

Testing

  • pytest

https://chatgpt.com/codex/tasks/task_e_68d4df402ba88328b5937643c21c75d2

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR in response to follow-up comments.

Comment on lines 37 to +48
```diff
     alpha = torch.clamp(dispersion, min=eps)
     mu = torch.clamp(rate, min=eps)
-    r = 1.0 / alpha
-    log_p = torch.log(r) - torch.log(r + mu)
-    log1m_p = torch.log(mu) - torch.log(r + mu)
-    log_prob = (
-        torch.lgamma(y + r)
-        - torch.lgamma(r)
+    log1p_alpha_mu = torch.log1p(alpha * mu)
+    log_alpha = torch.log(alpha)
+    log_mu = torch.log(mu)
+    inv_alpha = torch.reciprocal(alpha)
+    ll = (
+        torch.lgamma(y + inv_alpha)
+        - torch.lgamma(inv_alpha)
         - torch.lgamma(y + 1.0)
-        + r * log_p
-        + y * log1m_p
+        + inv_alpha * (-log1p_alpha_mu)
+        + y * (log_alpha + log_mu - log1p_alpha_mu)
```

[P1] Restore missing log(alpha) term in NB likelihood

The refactored negative_binomial_nll rewrites the log‑likelihood but omits the -(1/alpha) * log(alpha) component that was implicitly present before through r * log_p. The new implementation only subtracts inv_alpha * log1p(alpha*mu) and thus produces a value that differs from the standard negative‑binomial log‑likelihood for any dispersion ≠ 1. This changes both the loss scale and the gradient with respect to dispersion, so training will optimize the wrong objective and can no longer match the behaviour of the previous formulation. Reintroducing the -inv_alpha * log_alpha term before masking fixes the derivation.

Useful? React with 👍 / 👎.
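
A quick way to sanity-check either formulation is to compare it against torch.distributions.NegativeBinomial under the NB2 parameterization (mean mu, dispersion alpha, variance mu + alpha * mu**2). The helper name and the random test inputs below are illustrative:

```python
import torch
from torch.distributions import NegativeBinomial


def nb_nll_reference(y: torch.Tensor, rate: torch.Tensor, dispersion: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    """Reference NLL: NB2 with mean `rate` (mu) and dispersion `dispersion` (alpha)."""
    mu = torch.clamp(rate, min=eps)
    alpha = torch.clamp(dispersion, min=eps)
    # total_count r = 1 / alpha; success odds p / (1 - p) = alpha * mu, so the mean is mu.
    dist = NegativeBinomial(total_count=1.0 / alpha, logits=torch.log(alpha * mu))
    return -dist.log_prob(y)


# Compare the hand-rolled loss to the reference on random inputs.
y = torch.poisson(torch.full((4, 3), 5.0))
mu = torch.rand(4, 3) * 10 + 0.1
alpha = torch.rand(4, 3) * 2 + 0.1
print(nb_nll_reference(y, mu, alpha))
```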
