
Updated leaderboards on the benchmark #32

@cvg25

Description

Hello @seohongpark,
First of all, thanks for putting together such a great benchmark suite for offline goal-conditioned learning. I have a couple of questions:

  1. I have been working for some months now on a new method, and it seems that for most tasks it beats the leaderboards published on the project website. However, I'd like to know whether these leaderboards are up to date, or whether there are any newer SOTA models. Although I am still in an experimental phase, I'd like to put together a paper if the results continue to be this good :)

  2. Regarding one of the research questions on the project page:

How can we combine expressive policies with goal-conditioned RL? If we compare the results on cube-single-play and cube-single-noisy, we can see that current offline goal-conditioned RL methods often struggle with datasets collected by non-Markovian policies in manipulation environments. Handling non-Markovian trajectory data is indeed one of the major challenges in behavioral cloning, for which many recent behavioral cloning-based methods have been proposed (e.g., ACT and Diffusion Policy). Can we incorporate these recent advancements in behavioral cloning into offline goal-conditioned RL?

My question is: what prevents Diffusion Policy from being directly applicable to offline goal-conditioned RL problems like OGBench?
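For context on the question: mechanically, nothing stops a diffusion policy from being goal-conditioned; the denoiser can simply take the goal as an extra conditioning input alongside the observation, and the policy is then trained with goal-conditioned behavioral cloning (e.g., hindsight-relabeled goals). The sketch below illustrates only this conditioning idea, not the actual Diffusion Policy implementation: the dimensions, the tiny random-weight "denoiser," and the simplified DDPM-style sampling schedule are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from OGBench).
OBS_DIM, GOAL_DIM, ACT_DIM, HIDDEN = 10, 10, 4, 64

# Toy random-weight MLP standing in for a trained denoiser eps_theta.
W1 = rng.normal(0, 0.1, (OBS_DIM + GOAL_DIM + ACT_DIM + 1, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, ACT_DIM))

def eps_theta(noisy_action, t, obs, goal):
    """Denoiser conditioned on (obs, goal). The only structural change
    vs. an unconditional diffusion policy is concatenating the goal."""
    x = np.concatenate([obs, goal, noisy_action, [t]])
    return np.tanh(x @ W1) @ W2

def sample_action(obs, goal, n_steps=10):
    """Simplified DDPM-style ancestral sampling of a single action."""
    betas = np.linspace(1e-4, 0.1, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    a = rng.normal(size=ACT_DIM)  # start from pure Gaussian noise
    for t in reversed(range(n_steps)):
        eps = eps_theta(a, t / n_steps, obs, goal)
        # Posterior mean step (noise term added except at the last step).
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a += np.sqrt(betas[t]) * rng.normal(size=ACT_DIM)
    return a

obs, goal = rng.normal(size=OBS_DIM), rng.normal(size=GOAL_DIM)
action = sample_action(obs, goal)
```

The harder part, as I understand the research question, is not this plumbing but the RL side: pure goal-conditioned BC clones suboptimal data, and combining the expressive diffusion sampler with value-based improvement (e.g., weighting or guiding samples by a learned Q) is where the open problems lie.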

Regards,
Carlos
