Hi, thanks for the great library and the clear implementation of marginal message passing.
I have a question regarding the definition of `future_cutoff` and the handling of future messages at the end of the inference window in `pymdp/algos/mmp.py`.
In `run_mmp_factorized`, we have the following logic:
```python
# window
past_len = len(lh_seq)
future_len = policy.shape[0]

if last_timestep:
    infer_len = past_len + future_len - 1
else:
    infer_len = past_len + future_len

future_cutoff = past_len + future_len - 2
...
for itr in range(num_iter):
    for t in range(infer_len):
        for f in range(num_factors):
            ...
            # future message
            if t >= future_cutoff:
                lnB_future = qs_T[f]
            else:
                ...
```
With `future_cutoff = past_len + future_len - 2` and the condition `if t >= future_cutoff:`, the future message appears to be dropped for the last two timesteps:

- `t = past_len + future_len - 2` (second-to-last timestep)
- `t = past_len + future_len - 1` (last timestep)
From a technical perspective, the future message could still be computed at `t = infer_len - 2` (since `qs_seq[t+1]` exists); strictly speaking, `qs_seq[t+1]` is only out of range at `t = infer_len - 1`.
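To make the edge case concrete, here is a toy sketch of the indexing with made-up window sizes (the numbers are mine, not from pymdp; only the cutoff formula is taken from the snippet above):

```python
# Hypothetical window sizes, chosen only to illustrate the indexing.
past_len, future_len = 3, 4
last_timestep = False

# Mirrors the window logic quoted above (non-terminal branch).
infer_len = past_len + future_len - 1 if last_timestep else past_len + future_len
future_cutoff = past_len + future_len - 2

# Timesteps where the future message is replaced by the prior qs_T
dropped = [t for t in range(infer_len) if t >= future_cutoff]
print(dropped)  # the last two timesteps: [5, 6]
```

So with `infer_len = 7`, the forward-looking message is skipped at both `t = 5` and `t = 6`, even though `qs_seq[6]` would still be available at `t = 5`.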
I have a few questions regarding this:

1. Is there a specific theoretical or numerical reason to start dropping the future message at `t = past_len + future_len - 2`, rather than only at the very last timestep `t = infer_len - 1`?
2. Is this intended as a direct port of Karl Friston's SPM implementation, or is it a heuristic for edge handling specific to pymdp?
3. If I wanted the behavior where the future message is dropped only when `t + 1` truly doesn't exist, would it be valid to simply set `future_cutoff = past_len + future_len - 1`?
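For reference, here is the same toy sketch with the alternative cutoff I have in mind (again with made-up window sizes, and assuming `qs_seq` has `infer_len` entries so that `qs_seq[t + 1]` is valid whenever `t <= infer_len - 2`):

```python
# Same hypothetical window sizes as the snippet under discussion.
past_len, future_len = 3, 4
infer_len = past_len + future_len  # non-terminal case

# Proposed cutoff: drop the future message only where qs_seq[t + 1]
# genuinely does not exist.
proposed_cutoff = past_len + future_len - 1
dropped = [t for t in range(infer_len) if t >= proposed_cutoff]
print(dropped)  # only the final timestep: [6]
```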
I would appreciate any clarification on the intended edge-handling logic and whether the current choice is critical for numerical stability.
Thanks in advance!