Pinned repositories
- LLM-Evaluation-Framework (Public): An open, reproducible evaluation of modern Large Language Models (LLMs) on the GATE question papers. Python.
- mlperf-automations (Public, forked from mlcommons/mlperf-automations): This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to l… Python.