This is a feature most similar apps have: you choose a model, choose the backend, configure the hyperparameters and the prefill and decode amounts in tokens, then start a test for that specific model on that specific device. It usually takes a couple of minutes for larger models.
After the test finishes, it should display the average decode speed (tokens per second) and the prefill speed (tokens per second). This should work well once the device/RAM detection feature I requested earlier is implemented.
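A minimal sketch of how such a benchmark could measure both numbers, assuming a hypothetical `generate_fn` that processes the prompt and then yields one token per iteration (the function name and interface are illustrative, not any app's real API):

```python
import time


def run_benchmark(generate_fn, prompt_tokens: int, decode_tokens: int):
    """Time the prefill and decode phases of one generation run.

    generate_fn(prompt_tokens, decode_tokens) is assumed to be a generator
    that yields its first token once the prompt has been prefilled, then
    yields the remaining tokens one by one (hypothetical interface).
    """
    start = time.perf_counter()
    gen = generate_fn(prompt_tokens, decode_tokens)

    # Prefill ends when the first token arrives.
    next(gen)
    prefill_end = time.perf_counter()

    # Decode phase: count the remaining generated tokens.
    produced = 1
    for _ in gen:
        produced += 1
    end = time.perf_counter()

    prefill_tps = prompt_tokens / (prefill_end - start)
    decode_tps = produced / (end - prefill_end)
    return prefill_tps, decode_tps
```

The app would then just report the two returned numbers after the run, averaged over repeats if desired.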