Dear @idejie
Thank you for sharing the code for your impressive work — it's been a pleasure to explore it.
I had a question regarding the Visual State Interaction. In the paper, the interaction appears to be described as occurring specifically between the visual embeddings of the start and end frames, $x_{0}^{v}$ and $x_{T}^{v}$.

However, in the code https://github.com/idejie/PlanLLM/blob/58b2a2e03ec84c375b9b2fe2d531d5878954312e/models/schema/state_encoder.py#L79C9-L79C56, the variable `state_feat` seems to include not only $x_{0}^{v}$ (the embedding of the start frame) and $x_{T}^{v}$ (the embedding of the end frame), but also features from the intermediate frames (as indicated by the comment on this line: https://github.com/idejie/PlanLLM/blob/58b2a2e03ec84c375b9b2fe2d531d5878954312e/models/schema/state_encoder.py#L74C11-L74C40).
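To make my reading concrete, here is a minimal sketch of what I would have expected for a start/end-only interaction, versus what the code seems to do. Note that `visual_feat` with shape `(B, T, D)` and both helper functions are my own assumptions for illustration, not names from your repository:

```python
import torch

def start_end_only(visual_feat: torch.Tensor) -> torch.Tensor:
    # My expectation from the paper: keep only the start (t=0) and
    # end (t=T-1) visual embeddings, shape (B, 2, D).
    return torch.stack([visual_feat[:, 0], visual_feat[:, -1]], dim=1)

def all_states(visual_feat: torch.Tensor) -> torch.Tensor:
    # What state_feat appears to contain in the code: all T steps,
    # including intermediate frames, shape (B, T, D).
    return visual_feat
```
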
Am I missing something? Could you please clarify whether this broader inclusion is intentional in the context of the visual interaction module?
Thanks in advance for your time and insights!
Best regards,
Luigi