Performance Discrepancy in LightRAG & Inquiry about Implementation Alignment #86

@xucf04

Description

Dear authors, thank you for your work! I am currently working on a new benchmarking study and am exploring the possibility of using your framework for some of my experiments.

To validate the alignment, I ran a preliminary test comparing the LightRAG implementation in your framework against the original authors' codebase, using a subset of the multi-hop dataset Musique (as used in the IRCoT paper). The original LightRAG implementation reached 0.26 accuracy, while your framework's implementation reached 0.17.
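For reference, here is a minimal sketch of how I computed the accuracy numbers above. The answer normalization and containment-based matching follow a common convention for Musique-style QA evaluation; the function and variable names are illustrative and do not come from either codebase.

```python
# Illustrative accuracy computation for the comparison above.
# Assumptions: accuracy counts a prediction as correct if the
# normalized gold answer appears in the normalized prediction.
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def accuracy(predictions: list[str], gold_answers: list[str]) -> float:
    """Fraction of predictions containing their normalized gold answer."""
    hits = sum(
        normalize(gold) in normalize(pred)
        for pred, gold in zip(predictions, gold_answers)
    )
    return hits / len(gold_answers)

# Hypothetical usage: score both implementations on the same subset.
# acc_original = accuracy(preds_original, golds)   # e.g. 0.26
# acc_framework = accuracy(preds_framework, golds) # e.g. 0.17
```

If the two implementations are scored with different matching rules (exact match vs. containment, with or without normalization), that alone can shift accuracy by several points, so it may be worth ruling out before digging into retrieval details.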

I think these differences might be due to missing implementation details or specific "tricks" present in the original work (e.g., constraints on token counts for local and global retrieval modes, or specific hyperparameter settings that might differ here).

So my question is: are there known design choices or implementation differences for other methods in this framework compared with their original official codebases? I am wondering whether such deviations might generally affect the performance benchmarks across different methods.

Your response could save me a huge amount of debugging time :) Thank you very much for your time!
